
Elk 3.0 on Linux


The Almighty Root

Apr 24, 2000

Has anyone had success at getting Elk 3.0 to run on Linux 2.2.x kernels
with glibc 2.1 (libc 6)? The i486-linux-gcc configuration file is
woefully out of date wrt libc 6 systems, and doesn't understand Linux's
ELF (it supports a.out only and no dynamic linking). I'm quite sure
that all of the features that it wants are available, some in multiple
forms, but I've little clue how to configure for them. I've been using
UN*X boxen for 8 or 10 years now, but it's not every day that I need to
know how a SIGBUS handler can get the address of a faulted reference.
Nor do I usually ever care. Hell, I'd rather have a system that never
faulted on memory references because you couldn't chase pointers to
nonexistent locations. I'd rather not even care about pointers very
often, truthfully.

Hey, what do you know! I've got all that already. Too bad there's
no operating system written in my favorite language...

'james

Rob Warnock

Apr 25, 2000

<ja...@fredbox.com> wrote:
+---------------

| I've been using UN*X boxen for 8 or 10 years now, but it's not every day
| that I need to know how a SIGBUS handler can get the address of a faulted
| reference. Nor do I usually ever care. Hell, I'd rather have a system
| that never faulted on memory references because you couldn't chase pointers
| to nonexistent locations.
+---------------

That's not why Elk needs the SIGBUS handler stuff. Elk's generational
garbage collector uses the VM system's write-protection as a "write
barrier", and thus needs to be able to recover from a write attempt
to a read-only page, flag the page as needing scanning for pointers
to younger generations during the next GC, change the page to "writable",
then restart the failing write so that it can complete successfully.

If you just want to get Elk running without bothering to figure out
the SIGBUS stuff, you can put the line "generational_gc=no" in the
"site" file, and rebuild. That will use the stop&copy collector...


-Rob

-----
Rob Warnock, 41L-955 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043


Paolo Amoroso

Apr 25, 2000

On 24 Apr 2000 19:53:22 -0800, The Almighty Root <ja...@fredbox.com> wrote:

> Hey, what do you know! I've got all that already. Too bad there's
> no operating system written in my favorite language...

Someone is investigating the possibility of doing something similar:

Schema
http://mailhost.integritysi.com/mailman/listinfo/schema


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/

Riku Saikkonen

Apr 25, 2000

The Almighty Root <ja...@fredbox.com> writes:
>Has anyone had success at getting Elk 3.0 to run on Linux 2.2.x kernels
>with glibc 2.1 (libc 6)? The i486-linux-gcc configuration file is
>woefully out of date wrt libc 6 systems, and doesn't understand Linux's
>ELF (it supports a.out only and no dynamic linking). I'm quite sure

There is a Debian GNU/Linux package for Elk that works on my glibc 2.1
Debian (potato) system. You might want to check the diffs for that:
<URL:ftp://ftp.debian.org/pub/debian/dists/potato/main/source/devel/>
or thereabouts. elk-something.diff.gz is a diff to be applied to the
original source (elk-something.tar.gz).

(If you actually have a Debian system, just say "apt-get install elk"...)

--
-=- Rjs -=- r...@lloke.dna.fi

David Rush

Apr 25, 2000

Paolo Amoroso <amo...@mclink.it> writes:
> On 24 Apr 2000 19:53:22 -0800, The Almighty Root <ja...@fredbox.com> wrote:
> > Hey, what do you know! I've got all that already. Too bad there's
> > no operating system written in my favorite language...
>
> Someone is investigating the possibility of doing something similar:
> Schema http://mailhost.integritysi.com/mailman/listinfo/schema

Well, I should let the perpetrators speak for themselves, but the list
has been very quiet lately (like several months).

It's not *that* close to an OS, at least IMNSHO. They're more trying
to build the Lisp-Machine user environment to run on top of a
GNU/Linux core. The last thing I remember from the list traffic was
discussion on what sort of Scheme platform to use. There seemed to be
a fair amount of favor for PreScheme...

david rush
--
This space intentionally left blank

The Almighty Root

Apr 25, 2000

Paolo Amoroso <amo...@mclink.it> writes:

> On 24 Apr 2000 19:53:22 -0800, The Almighty Root <ja...@fredbox.com> wrote:
>
> > Hey, what do you know! I've got all that already. Too bad there's
> > no operating system written in my favorite language...
>
> Someone is investigating the possibility of doing something similar:
>
> Schema
> http://mailhost.integritysi.com/mailman/listinfo/schema

This is a small world... I created that particular list and Schema is
my project. I have remade that list in a new location (I no longer work
for that company and their mailserver has problems now that I'm not
maintaining it) but I never got around to updating anything or moving
the archives (or subscriptions for that matter) because I haven't had
anything to report on the progress.

Actually I'm reconsidering the entire design once again, this time for
pragmatic reasons. To tell the truth, someone (Shriram Krishnamurthi?
I've forgotten who...) made the wise comment that we don't really need
YA Scheme implementation, and this woke me up so I'm shopping for ideas
right now.

Basically what I'm thinking of doing, at least to get some quick
headway, is to utilize the hardware support that both X and the Linux
kernel give. Both of these software systems provide a vast amount of
support for different hardware devices and platforms and I don't want to
remake the wheel when it comes to implementing the low-level device and
architecture support for an operating system. I'm sure that at some
point a successful Scheme-based operating system has to consider this,
but I'm not going to start with bare metal because it takes so long to
obtain usable results. I've realized that the Unix design, although a
sad excuse for a real operating system, does happen to provide an
excellent skeleton infrastructure for higher-level operating system (or
better, `operating environment') implementation.

The most irritating part of Unix IMHO is not the design of the kernel
(yeah, yeah it's a monolithic spaghetti ball) or the functionality of
system calls (yeah, yeah, no PCLSRing) or the unrecoverability of kernel
panics, or whatever else is associated with the kernel and driver
implementations. What's *really* irritating about the Unix design is
all the institutionalized crufty software still floating around after
thirty years of development, redesign, and redevelopment. Unix hackers
have long spent time hacking on the hardware support, improving process
scheduling, memory management, and the like, but they still live with an
interface that feels just like 2.9BSD on a PDP-11/40, with some frills.
It's disgusting. Everything from the init process on upwards is
institutionalized, designed just like it was on the good old
minicomputers.

(I'm not degrading the Unix (or UN*X as it were) of that era, nor the
machines it ran on, many of which I'm enamored of and wish I could own.
I'm criticizing the stubborness of an operating system that dates from
that era and appears to be little changed from it.)

Runlevels are a supreme example of what I'm talking about. What the
hell is a runlevel, really? If I check the manual page for init(8)
("man init" -- how obvious is that?) I read:

"A _runlevel_ is a software configuration of the system which allows
only a selected group of processes to exist. The processes spawned by
init for each of these runlevels are defined in the #/etc/inittab# file.
Init can be in one of eight runlevels: 0-6 and S or s. The runlevel is
changed by having a privileged user run #telinit#, which sends
appropriate signals to #init#, telling it which runlevel to change to."
[Linux System Administrator's Manual, init(8)]

Now all that init really does, it seems, (at least sysvinit, the init
package used on my Linux box, but most other init packages for Unices
are similar) is spawn some gettys and a few programs to handle signals
(like shutting down on C-M-Delete), and run a big nasty wad of shell
scripts (the heinous, unmaintainable crap in the /etc/rc.d directory).
You have the choice to boot the machine into a certain runlevel that
will necessitate the running of a horde of incomprehensible scripts that
either spew senseless `informational' error messages on the screen at a
rate so fast on newer machines that you can't read them, or that spew
green ASCII art on the screen through the use of yet more arcane and
incomprehensible scripts that aren't even documented. Sometimes the
latter scripts even spew little red ASCII art to tell you that something
went wrong, but the little red ASCII art scrolls off the screen so fast
that you can't tell what happened by the time a getty takes over the
console and obliterates what information might be left or still saved in
the console's scrollback buffer. And if you want to fix such a problem
you have to wade through logs to find what happened, interpret the
arcane error message ("modprobe: modprobe: Can't locate module
sound-service-0-0\naumix-minimal: aumix: error opening mixer"), guess
which of the N scripts generated this message, find the offending line
in that script, hunt for the variables that the script inherited from
some other script that ran it which inherited those variables from the
previous script that found them in some script consisting entirely of
variables which are passed to the offending program giving it the
completely wrong idea about reality that it is suffering from, wade
through five manual pages with no index other than a raw string search,
discover the appropriate argument to some command line option by sheer
luck, change the original file that the variables came from to correct
the mistaken assumption that the script writer had about your
machine, simulate actually running the script with the correct arguments
by running the program in question with a hand written series of options
gleaned from the scripts being modified, and hope like hell the whole
mess works the next time the machine is rebooted because some stupid web
browser wanked out and wedged all the fscking input devices.

It's rather patently obvious that a vastly better design is possible
than the current init cruft. Most any design for init would be better
than such a mess. I'm not suggesting something even more inane, like
the garbage that Windows NT has foisted off on the public, with its
little window in an itty-bitty font that contains supposed `services'
(Unix daemons) that you click on and press buttons that frequently don't
do the same thing they advertise. That's just as bad as, if not worse than,
Unix init.

I know that it's very impolite to criticize a design without offering
some sort of alternative. So how about this. Init, when started,
spawns off a handful of processes and manages them (keeping them alive,
killing them, restarting them, etc.). These processes are daemons
dedicated to some major subsystem providing necessary functionality for
the system. Such a daemon itself manages other daemons that it spawns
off, each of which implements the actual functionality of a service. So
we have an init which starts the supervisory daemon (superdaemon) for
networking, which subsequently spawns the TCP/IP daemon (aka inetd)
which then handles all subsequent TCP/IP services. If we change the
configuration file for the TCP/IP daemon and wish it to reload its
configuration we inform the networking superdaemon which performs the
necessary magic on the TCP/IP daemon to achieve that end. If for some
reason the TCP/IP daemon gets killed then the networking superdaemon
will check to see how it died and will send the appropriate log message
(which is written out to a log by the logging daemon which also informs
the administrator through the appropriate channels) as well as
restarting it. Init itself has a configuration file determining which
services should be started and how they should be managed, and can take
a command line argument which disables all of them and drops into the
equivalent of single user mode. Each superdaemon has an appropriate
configuration file which instructs it on what daemons to start with what
arguments and how to manage each of them individually. Every daemon has
six things that can be done to it by its superdaemon -- stop it, start
it, cycle it (stop and restart), pause it, reconfigure it (tell it to
reload its configuration files), and obtain its current running status.

Of course, all of this structure should be built in Scheme. It should
be integrated into coherent parts, all of which access each other
through well-defined interfaces and operate in a simple, predictable
manner. Nothing like the ad hoc mudball of the current init.
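
To make that a little more concrete, here's the flavor of thing I have in
mind. Strictly a sketch: the process calls (fork, exec-path, wait,
signal-process, signal/term) are scsh's, and the data layout, config
reader, logging, and error handling are all hand-waved.

;;; Toy superdaemon, sketch only.

(define (make-service name program args)
  ;; Name (a string), program + argument list to exec, and the proc
  ;; object of the running child (#f while it isn't running).
  (vector name program args #f))

(define (service-name s)        (vector-ref s 0))
(define (service-proc s)        (vector-ref s 3))
(define (set-service-proc! s p) (vector-set! s 3 p))

(define (start-service! s)
  (set-service-proc! s
    (fork (lambda ()
            (apply exec-path (vector-ref s 1) (vector-ref s 2))))))

(define (stop-service! s)
  (let ((p (service-proc s)))
    (if p
        (begin (signal-process p signal/term)
               (wait p)
               (set-service-proc! s #f)))))

(define (cycle-service! s)
  (stop-service! s)
  (start-service! s))

;; The superdaemon proper: start everything, then sit in a loop
;; restarting whatever dies and answering stop/start/cycle/pause/
;; reconfigure/status requests (the request channel is hand-waved).
(define (superdaemon services)
  (for-each start-service! services)
  (let loop ()
    ;; ... wait for a child to exit or a request to arrive ...
    (loop)))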


That's my current project. If I can replace the init structure that
exists with something more sensible and easy to configure and maintain
then I'll have made a big step forward towards a complete Scheme-based
system running on the Linux kernel. From there I suppose a Scheme-based
command interpreter will be necessary, some sort of Scheme interpreter
with a special mode for a terse, low-parenthesization syntax. From
there perhaps a rewrite and redesign of the usual Unix utilities,
replacing those which are a pure sop for shell programming with loadable
Scheme libraries to do various utilitarian tasks. And then perhaps an
Emacs-like editor in Scheme (which can probably be ripped off). From
there the GUI programs need to be written, such as an integrated
Scheme-based window manager, a web browser, and other interesting parts.
Then perhaps the slow, steady replacement of the existing libraries and
development tools with Scheme-based tools. Then a portability library
and tools to integrate the other Unix utilities and programs which are
more difficult to replace, such as TeX, GCC, media applications (eg Xmms
and RealPlayer(tm)), and so forth into the blue, blue sky.

This plan is as ambitious as a Scheme-based operating system implemented
atop bare metal, but results of this plan will become usable much sooner
than a bare metal implementation.

The first problem I'm currently facing is whether to design and
implement a new Scheme that will provide both a good compiler (we are
after all making system programs here) and the requisite FFI and support
for Linux system calls, or whether I can find one that with some hacking
will fit my needs. If indeed I do find the right Scheme implementation
I'll need to learn how its guts work so I can hack on it to add the
features I'll need. I'll also need to find a compiler if the right
implementation is lacking one and adapt it to that implementation. Then
I'll have to start testing the syscalls and make Scheme stubs for FFI
calls to the various libraries I'll be using. Then things will begin to
happen.
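
(For a taste of what the syscall layer already looks like from Scheme,
the kind of thing below works in scsh today with its existing POSIX
bindings -- the procedure names are scsh's, nothing new, and the getty
path and arguments are only an example:)

;; Poking at the system from scsh.
(define (report-file f)
  (let ((info (file-info f)))
    (format (current-output-port) "~a: ~a bytes, mode ~a\n"
            f
            (file-info:size info)
            (number->string (file-info:mode info) 8))))

(define (spawn-getty tty)
  ;; Fork/exec a getty on TTY; return the proc object so a supervisor
  ;; can wait on it later.
  (fork (lambda () (exec-path "/sbin/getty" "38400" tty))))

(report-file "/etc/inittab")
(wait (spawn-getty "tty1"))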

This is why I was curious about getting Elk running. It seemed like it
might have enough support to get started with. Scsh looks suitable as
well, but I've never heard of a native compiler for S48 and I'm somewhat
wary of implementing one for it. Guile looked like it might have promise at
first, but I've been getting bad vibes from the documentation. MIT
Scheme might have the necessary functionality as well, but I haven't
looked too closely at it. I also haven't gotten around to examining
MzScheme, which might show promise as well.

So can anyone offer suggestions as to Scheme choices? Experience with
the various FFIs and compilers? Ideas and concerns about implementing
native compilers for the various Schemes? I won't kid myself about the
largeness of this undertaking and how much work it will take to
implement...

If you think I'm crazy then go ahead and say it, but that doesn't mean
I'm going to listen very closely.

'james

thi

Apr 25, 2000

The Almighty Root <ja...@fredbox.com> writes, among other things:

> I know that it's very impolite to criticize a design without offering

> some sort of alternative. So how about this. [snip]

the design you propose sounds remarkably similar to the design used for
many unix systems already. perhaps it is not redesign that you desire?

> This is why I was curious about getting Elk running. It seemed like it
> might have enough support to get started with. Scsh looks suitable as
> well, but I've never heard of a native compiler for S48 and I'm somewhat
> wary of implementing one for it. Guile looked like it might have promise at
> first, but I've been getting bad vibes from the documentation. MIT
> Scheme might have the necessary functionality as well, but I haven't
> looked too closely at it. I also haven't gotten around to examining
> MzScheme, which might show promise as well.

guile docs give me strange vibes, too. perhaps this tutorial can help:

http://freespace.virgin.net/david.drysdale/guile/tutorial.html

> If you think I'm crazy then go ahead and say it, but that doesn't mean
> I'm going to listen very closely.

lucky for you there are enough lunatics out there already doing pieces
of what you want to do. if you can munge the nature of your craziness
from implementation to integration, you might get results faster.

thi

The Almighty Root

Apr 26, 2000

thi <t...@netcom.com> writes:

> The Almighty Root <ja...@fredbox.com> writes, among other things:
>

> > I know that it's very impolite to criticize a design without offering

> > some sort of alternative. So how about this. [snip]
>
> the design you propose sounds remarkably similar to the design used for
> many unix systems already. perhaps it is not redesign that you desire?

It is to a certain extent but I'm trying very hard to regularize it as
much as possible. Perhaps to the extent of making wrappers for
daemons, or even replacing them. Though one daemon may like a SIGHUP
to tell it to reload its configuration and another one likes a SIGUSR1
for the same thing, the interface to them both will be completely the same.
And the administrator/user won't ever send signals to daemons by hand,
but will use the provided channels. The only time an administrator
should need to send signals with kill (other than for broken user
processes) would be if a daemon has wedged itself horribly. In that
case the superdaemon would respawn the appropriate daemon without even
thinking.
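
Concretely, the wrapper layer could look something like this -- sketch
only; signal-process and the signal/ names are scsh's, and the table of
which daemon wants which signal is invented:

;; One generic "reconfigure" operation; per-daemon signal quirks live in
;; a table instead of in the administrator's head.
(define reconfig-signals
  `(("inetd" . ,signal/hup)
    ("frobd" . ,signal/usr1)))   ; hypothetical daemon that wants USR1

(define (reconfigure-daemon! name proc)
  ;; PROC is the proc object (or pid) of the running daemon.
  (let ((entry (assoc name reconfig-signals)))
    (signal-process proc (if entry (cdr entry) signal/hup))))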

That entire spiel was, I should note, a one-off. I just thought it
out while composing that message, although the idea had been bouncing
around in my head for the last week. Suggestions are certainly
welcome, especially before I start writing anything, while my opinions
are still malleable. Any other people have init-replacement ideas?



> guile docs give me strange vibes, too. perhaps this tutorial can help:
>
> http://freespace.virgin.net/david.drysdale/guile/tutorial.html

That's an interesting tidbit. It however reinforces my belief that
guile isn't meant to be used except as an extension to a C program. I
don't want any C floating around except perhaps within the Scheme
implementation itself. That I'd have to call the interpreter and feed
it functions all in C leaves a bad taste in my mouth.

This is the same reason why I don't hack on SCWM, although I use it
incessantly. Because to hack on it requires extensive knowledge of
the C implementation and the Xlib cruft that lurks inside it. I
wouldn't mind Xlib programming if I didn't have to use C. Using
Scheme as an extension language for an already written program is a
Guile mindset that I don't share. And don't like.

Oh, for a free Scheme compiler that produces independently linkable
and executable native object code... (And implements all of R5RS!)

> > If you think I'm crazy then go ahead and say it, but that doesn't mean
> > I'm going to listen very closely.
>

> lucky for you there are enough lunatics out there already doing pieces
> of what you want to do. if you can munge the nature of your craziness
> from implementation to integration, you might get results faster.

I had this idea. I figure that by showing some initiative on a hard
part of this project and by manufacturing tools and an implementation
base I'd get enough people interested to start writing new software
and integrating existing software.

But first I've got to get those tools made and the base started...

'james

The Almighty Root

Apr 26, 2000

r...@lloke.dna.fi (Riku Saikkonen) writes:

> The Almighty Root <ja...@fredbox.com> writes:
> >Has anyone had success at getting Elk 3.0 to run on Linux 2.2.x kernels
> >with glibc 2.1 (libc 6)? The i486-linux-gcc configuration file is
> >woefully out of date wrt libc 6 systems, and doesn't understand Linux's
> >ELF (it supports a.out only and no dynamic linking). I'm quite sure
>
> There is a Debian GNU/Linux package for Elk that works on my glibc 2.1
> Debian (potato) system. You might want to check the diffs for that:
> <URL:ftp://ftp.debian.org/pub/debian/dists/potato/main/source/devel/>
> or thereabouts. elk-something.diff.gz is a diff to be applied to the
> original source (elk-something.tar.gz).

Thank you very much. It's actually compiling now as I write this,
which is much better than a bunch of make errors.

'james

David Rush

Apr 26, 2000

The Almighty Root <ja...@fredbox.com> writes:
> thi <t...@netcom.com> writes:
> > The Almighty Root <ja...@fredbox.com> writes, among other things:

> Oh, for a free Scheme compiler that produces independently linkable
> and executable native object code... (And implements all of R5RS!)

It's called Bigloo. I mentioned this to you back in the days of the
Schema mailing list, but nobody seemed interested. Bigloo is a
Scheme->C compiler with a very C-friendly FFI (as in you don't need to wrap
existing C libraries with still more C-code). The only problem I've
ever had with it is in a call/cc-heavy program (which was slow, and
has GC problems on certain platforms), but that's a result of
its decision to be C-friendly (and the Boehm collector).

The beauty was that I was able to do a nearly mindless port to other
Schemes from Bigloo. It is *very* standards compliant, including
SRFI-0 et al.

david rush
--
A Bigloo fan since the last century...

Joe Marshall

Apr 26, 2000

The Almighty Root <ja...@fredbox.com> writes:

> I've realized that the Unix design, although a
> sad excuse for a real operating system, does happen to provide an
> excellent skeleton infrastructure for higher-level operating system (or
> better, `operating environment') implementation.
>

> [rant elided]


>
> I know that it's very impolite to criticize a design without offering
> some sort of alternative.

Traditional Unix-haters didn't feel a need to offer an alternative for
a number of reasons:

1. It's not a `design' when it is patently obvious that no
forethought went into it.

2. Any reasonable person with a 6th grade education could do better,
so several obvious alternatives have probably already been dreamed
up.

3. Implying that improvements would be considered or adopted offends
the jaded demeanor of other Unix haters.

4. The catharsis comes from the vitriol.

That being said....

> So can anyone offer suggestions as to Scheme choices? Experience with
> the various FFIs and compilers? Ideas and concerns about implementing
> native compilers for the various Schemes? I won't kid myself about the
> largeness of this undertaking and how much work it will take to
> implement...

MIT Scheme is a hairball, but it has a *great* compiler. Very few
people (1?) are currently working on developing it.

MzScheme is under active development, and it has been compiled to a
standalone kernel using the Flux OS Toolkit.

--
~jrm


Tim Moore

Apr 26, 2000

On 25 Apr 2000, The Almighty Root wrote:

> The most irritating part of Unix IMHO is not the design of the kernel
> (yeah, yeah it's a monolithic spaghetti ball) or the functionality of
> system calls (yeah, yeah, no PCLSRing) or the unrecoverability of kernel
> panics, or whatever else is associated with the kernel and driver
> implementations. What's *really* irritating about the Unix design is

Ya know, I've seen other references to "PCLusering" in Lisp groups
recently, I think I know what it means, and I've read Gabriel's parable.
Given that BSD has had restartable system calls for the last 17
years, could someone explain to me what "PCluser" problems still exist in
modern Unix?

Tim

Olin Shivers

Apr 26, 2000

Some comments:

    Basically what I'm thinking of doing, at least to get some quick
    headway, is to utilize the hardware support that both X and the Linux
    kernel give. Both of these software systems provide a vast amount of
    support for different hardware devices and platforms and I don't want to
    remake the wheel when it comes to implementing the low-level device and
    architecture support for an operating system.

OSKit gets you up and running on the bare metal pretty painlessly. It's
been used to get Scheme, Java & ML systems up on raw machines.

    Scsh looks suitable as well, but I've never heard of a native compiler for
    S48 and I'm somewhat wary of implementing one for it.

Scsh is not primarily an implementation. It is a design and a huge pile of
source, both of which are available, for free, on the Net. You can take
it all and repurpose it to any end you like. Just the design work it
represents is not insignificant.

A full compiler for S48 would be some work, but it would be quite easy to do a
byte-code->x86 translator. That would get you a huge performance improvement.
Performance is just not the issue, though. If you build something -- anything
-- real and it's useful, people will figure out ways to make it faster.

The big outstanding issues with scsh (Scheme48) are how to get dynamic
module loading/linking, and separate byte compilation of source modules.
I've been waiting 9 years for these things. Adding them to S48 would have
a big impact on the usability of the system, in terms of startup time
and memory footprint.

    Runlevels are a supreme example of what I'm talking about. What the
    hell is a runlevel, really? If I check the manual page for init(8)
    ("man init" -- how obvious is that?) I read:

    Now all that init really does, it seems, (at least sysvinit, the init
    package used on my Linux box, but most other init packages for Unices
    are similar) is spawn some gettys and a few programs to handle signals
    (like shutting down on C-M-Delete), and run a big nasty wad of shell
    scripts (the heinous, unmaintainable crap in the /etc/rc.d directory).

If you don't like that stuff, you can replace it with something written in any
good Unix-based Scheme *without* getting into the mess of doing your own
OS. The init process can be anything you want it to be; its architecture is
not baked into the Unix kernel design. Its job is one very well suited to
Scheme. As are things like inetd and sendmail -- a Scheme-based mail system
would be a fine thing.

I have, over time, moved a lot of the /etc scripts on my notebook over to scsh
-- ppp dialup, pcmcia, config bits, backup dumps, etc. It's *very* pleasant to
do this kind of stuff in scsh.

    If you think I'm crazy then go ahead and say it,

You are crazy, but that's not important. The only thing that matters is
whether or not you do anything. Do *anything*, and you matter.

Just for fun, I append a typical system script I use that's written in Scheme.
It does backups over the net; I use it almost every day.
-Olin

-------------------------------------------------------------------------------
#!/usr/local/bin/scsh \
-o let-opt -e main -s
!#

;;; Dump a file system on my notebook computer out to a backed-up
;;; disk on a sessile system. The bits are compressed, encrypted,
;;; and copied over the net using ssh to a file named
;;; $name$level.gz.2f.
;;; If you say
;;; netdump 0 / root
;;; then you do a level 0 dump of the / file system to a file named root0.dgz.bf
;;; in the fixed directory /home/c3/shivers/mk-backup/stable/.
;;;
;;; We play some games with ssh and su, because this script has to be run
;;; by root in order to have total access to the file system being dumped,
;;; but you must be someone less threatening (me) so that the remote machine
;;; will allow you to ssh over and write the bits.
;;; -Olin


(define tdir "/opt/backups/spool/shivers") ; The target directory
(define me "shivers")
(define rhost "tin-hau.ai.mit.edu")

(define (useage)
  (format (error-output-port)
          "Usage: netdump level dir name\nFiles backed up to ~a on ~a.\n"
          tdir rhost)
  (exit -1))


;;; These guys are useful for root scripts.

(define-syntax exec/su                  ; (exec/su uname . epf)
  (syntax-rules ()                      ; Su to UNAME, then exec EPF.
    ((exec/su user . epf)
     (begin (set-uid (->uid user))
            (exec-epf . epf)))))

(define-syntax run/su                   ; (run/su uname . epf)
  (syntax-rules ()                      ; Run command EPF as user UNAME
    ((run/su user . epf)
     (wait (fork (lambda () (exec/su user . epf)))))))

(define (main args)
  (if (= 4 (length args))
      (let* ((level (cadr args))
             (dir (caddr args))
             (name (cadddr args))

             (fmt (lambda args (apply format #f args))) ; abbreviation
             (newfile (fmt "~a/new/~a~a.tgz.2f" tdir name level))
             (stablefile (fmt "~a/stable/~a~a.tgz.2f" tdir name level)))

        (format (error-output-port) "Starting level ~a dump of ~a to ~a.\n"
                level dir newfile)

        ;; The exit status of a pipeline is the exit status of the last element
        ;; in the pipeline -- so (| (dump) (copy-to-remote-machine)) won't
        ;; tell us if the dump succeeded. So we do it the hard way -- we
        ;; explicitly fork off the dump and copy procs, pipe them together
        ;; by hand, and check them both.

        ;; Fork off the dump process: dump uf<level> - <dir>
        (receive (from-dump dump-proc)
                 (run/port+proc (dump ,(fmt "uf~a" level) - ,dir))

          ;; Fork off the compress/encrypt/remote-copy process,
          ;; sucking bits from dump's stdout.
          (let ((copy-proc (fork (lambda ()
                                   (exec/su me
                                     (| (gzip)
                                        (mcrypt)
                                        (ssh ,rhost
                                             ; "dd of=/dev/tape"
                                             ,(fmt "cat > ~a" newfile)))
                                     (= 0 ,from-dump))))))

            (close from-dump)
            (cond ((and (zero? (wait dump-proc))  ; Wait for them both
                        (zero? (wait copy-proc))) ; to finish.

                   ;; The dump&net-copy won; move the file to the stable dir.
                   (run/su me (ssh ,rhost mv ,newfile ,stablefile))
                   (format (error-output-port) "Done.\n"))

                  (else (format (error-output-port) ; Oops.
                                "Had a problem dumping ~a = ~a!\n" dir name)))))
        (exit))
      (useage)))

felix

Apr 27, 2000


The Almighty Root wrote in message ...

>
>So can anyone offer suggestions as to Scheme choices? Experience with
>the various FFIs and compilers? Ideas and concerns about implementing
>native compilers for the various Schemes? I won't kid myself about the
>largeness of this undertaking and how much work it will take to
>implement...
>


You won't find any consensus on a Scheme implementation. This is a
religious question. It's like asking for the preferred Editor, or Language,
or Native-/C-backend, or CPS-/Direct-style, or,or,or... :-)


How about Bigloo or Gambit? Since they generate C it should be possible
to start smoothly with replacing OS functionality. Not to speak of the
portability gains.


BTW, how many have had this idea (LISP/Scheme OS/OE) before? And failed? :-)


felix
(who probably doesn't know what you're talking about)


Christopher Browne

Apr 28, 2000

Centuries ago, Nostradamus foresaw a time when Olin Shivers would say:

>Some comments:
>>Basically what I'm thinking of doing, at least to get some quick
>>headway, is to utilize the hardware support that both X and the Linux
>>kernel give. Both of these software systems provide a vast amount of
>>support for different hardware devices and platforms and I don't want to
>>remake the wheel when it comes to implementing the low-level device and
>>architecture support for an operating system.
>
>OSKit gets you up and running on the bare metal pretty painlessly. It's
>been used to get Scheme, Java & ML systems up on raw machines.

The research material that has come out of that group has been rather
neat; the substrate that pulls in Linux and FreeBSD drivers via
something resembling COM is the _slickest_ idea of modern days for
gaining some advantage from the development of device drivers for
Linux and FreeBSD.

I am, however, a bit skeptical that this approach is of _massive_
benefit.

The problem that virtually all attempts at "LispOS" implementations
have fallen prey to is that of getting caught up in having to support
all sorts of bizarre sorts of hardware.

A whole bunch <http://www.hex.net/~cbbrowne/lisposes.html> have come
and gone.

The OSKit approach looks like the one that most plausibly offers a
route to get _some_ benefit from the _massive_ efforts going into
hardware support on Linux and *BSD; it is, nonetheless, only providing
the hardware support that was available in early 1997. (Linux 2.0.29)
Further, OSKit is not portable to more than IA-32 systems. More is
predicted, but I rather think that it has been predicted for several
years now, to no avail.

The approach that seems _rather_ more "production-worthy," at this
point, is that of building a "Lisp System" by layering a Lisp-based
set of user space tools atop a kernel coming from Linux or one of the
BSDs.

>>Runlevels are a supreme example of what I'm talking about. What the
>>hell is a runlevel, really? If I check the manual page for init(8)
>>("man init" -- how obvious is that?) I read:
>
>>Now all that init really does, it seems, (at least sysvinit, the init
>>package used on my Linux box, but most other init packages for Unices
>>are similar) is spawn some gettys and a few programs to handle signals
>>(like shutting down on C-M-Delete), and run a big nasty wad of shell
>>scripts (the heinous, unmaintainable crap in the /etc/rc.d directory).
>
>If you don't like that stuff, you can replace it with something written in any
>good Unix-based Scheme *without* getting into the mess of doing your own
>OS. The init process can be anything you want it to be; its architecture is
>not baked into the Unix kernel design. Its job is one very well suited to
>Scheme. As are things like inetd and sendmail -- a Scheme-based mail system
>would be a fine thing.

Yes, indeed.

<ftp://linux01.gwdg.de/pub/cLIeNUX/interim/> is the home of cLIeNUX, a
Linux that is essentially "Forth-based."

Notable properties:

- It uses a very different filename hierarchy that is very
non-UNIX-like:
<ftp://linux01.gwdg.de/pub/cLIeNUX/descriptive/DSFH.html>
"cLIeNUX now implements what I call the DSFH, the Dotted Standard
Filename Hierarchy. I had some nice docs on this that got vaporized
in a reboot accident. This happens when doing a distro. For now,
look at the DSFHed script, and the symlinks in / . DSFH makes the
standard unix filenames invisible, and modifies them. They are
still there though, in modified form. Stuff that looks for
e.g. /bin automatically can be converted to look for /.bi
automatically, automatically. And the user gets sensible names to
look at in her native language. Sorry if that sounds crazy."

- It is based on LIBC5, and uses C and FORTH as the base programming
languages.

It _appears_ that it uses a customized init; that is the _really
crucial_ thing that would change in creating a "LispOS" atop the Linux
kernel. I'm not sure if cLIeNUX init is written in FORTH; that would
be a pretty appropriate thing, although I somehow suspect that it is
not.

Everything else, on Linux, is invoked via init, whether directly or
indirectly, so that if you change init, that provides a substantially
different character to the system.

The other notable Linux that has a Rather Unique Init is
<http://www.pell.chi.il.us/~orc/Mastodon/> David Parsons' "Mastodon."

The point, if it's not clear, is that there is fairly ample opportunity
to customize a system _based on Linux_ into whatever form you like.
cLIeNUX is an example of how a "Forth person" built something that
runs a C-and-Forth "userspace."

Creating a "userspace" in your favorite image may be adequate to
provide the environment desired, and if that be so, that is likely to
be a stabler choice than most of the alternatives, as the ongoing
development of Linux-as-kernel provides a platform that can improve
without necessarily forcing you to rewrite great gobs of it each time
Intel comes out with a new CPU.

>I have, over time, moved a lot of the /etc scripts on my notebook
>over to scsh -- ppp dialup, pcmcia, config bits, backup dumps,
>etc. It's *very* pleasant to do this kind of stuff in scsh.

This is an area in which it is quite unfortunate that there _hasn't_
been any improvement on UNIX; there have traditionally been two
approaches:
a) The BSD way, where you have a script that starts up desired
   services, and
b) The SysV way, where there is a boatload of scripts that start up
   individual services, and then symbolic links to turn this into a
   list that can be executed.

People of both 'religions' take potshots at the other, so that the only
choice is between the Right Way, which is the init that _I_ use, and
the Wrong Way, which is the init that _you_ use.

Virtually no examination of the question, "Is there perhaps a better
way?"

David Parsons' approach seems more like using a UNIX "Makefile" to
establish a set of service dependencies that need to be satisfied.

There are some deadlock conditions to worry about, but it would be a
Truly Good Thing to try to come up with a better way of managing this
stuff.

Note that the Software Carpentry
<http://software-carpentry.codesourcery.com/> project is seeking to
build tools to supersede autoconf, make, Expect, and Bugzilla.
They've got some funding, and a goodly hundred or so candidate
utilities proposed in the various tool classifications.

Something good may come out of _that_.

>>If you think I'm crazy then go ahead and say it,
>
>You are crazy, but that's not important. The only thing that matters
>is whether or not you do anything. Do *anything*, and you matter.

Indeed.

Take a Linux kernel, write an "init" that uses Scheme-based startup
scripts, and build up a "takes-a-few-floppies" distribution that
parallels cLIeNUX in having a user-space that is largely coded in
Scheme, and this can become an interesting project.

Unfortunately, there is a dearth of Schemes that compile directly to
machine code; that is rather more common with Forth. It might be more
"natural" to implement this using CMU Common Lisp instead. But the
exploration of how to implement this would doubtless provide
interesting insights and learning...
--
A student, in hopes of understanding the Lambda-nature, came to
Greenblatt. As they spoke a Multics system hacker walked by. "Is it
true", asked the student, "that PL-1 has many of the same data types
as Lisp?" Almost before the student had finished his question,
Greenblatt shouted, "FOO!", and hit the student with a stick.
cbbr...@ntlug.org- <http://www.hex.net/~cbbrowne/lsf.html>

James A. Crippen

Apr 28, 2000

David Rush <ku...@bellsouth.net> writes:

> > Oh, for a free Scheme compiler that produces independently linkable
> > and executable native object code... (And implements all of R5RS!)
>
> It's called Bigloo. I mentioned this to you back in the days of the
> Schema mailing list, but nobody seemed interested.

That's because I was still trying to implement my own VM for a Scheme
system. Which was really just spending time in the wrong place, since
I'd never catch up with what's been done already.

> Bigloo is a
> Scheme->C with a very C-friendly FFI (as in you don't need to wrap
> existing C libraries with still more C-code). The only problem I've
> ever had with it is in a call/cc-heavy program (which was slow, and
> has GC problems on certain platforms), but that's a result of
> its decision to be C-friendly (and the Boehm collector).

I've never understood why compiling call/cc should be so much of a problem.
I'm under the impression (from not too thoroughly reading various PhD
theses (like Amr Sabry's)) that in theory an arbitrary Scheme program
making use of call/cc can be CPS transformed into a program with only
explicit continuations. From there some partial evaluation and
optimization can be done, if wished, and then the program can be
transformed into machine code (or C, which is close enough). In that
case, since call/cc is transformed into various shapes of explicit
continuations, it shouldn't serve to be a problem. Am I confused
about this? Or is theory not directly applicable to practice in this
case? Or is it just that many implementations have been made by
people unfamiliar with the CPS transformation and its competitors (I don't
buy that *at all*)?
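
(The bit of theory I'm leaning on is just that once everything is in
CPS, call/cc stops being special. A toy sketch in plain Scheme, where
every CPS'd procedure takes its continuation k as an extra argument:)

;; call/cc in a CPS'd world: the "captured continuation" is just the k
;; already in hand.
(define (call/cc-cps f k)
  (f (lambda (v ignored-k) (k v))   ; the escape procedure
     k))

;; (+ 1 (call/cc (lambda (cc) (cc 41)))) in CPS, with display as the
;; final continuation:
(call/cc-cps (lambda (cc k1) (cc 41 k1))
             (lambda (v) (display (+ 1 v))))   ; prints 42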



> The beauty was that I was able to do a nearly mindless port to other
> Schemes from Bigloo. It is *very* standards compliant, including
> SRFI-0 et al.

Bigloo sounds very promising. I'll dl a copy and take a good look,
and compare with Scsh (which is the top of my list right now).

> A Bigloo fan since the last century...

Since 1900 or earlier?? Sorry... But I had to dig at that since I've
been kidding everyone else about this Brand New Century stuff. There
was no year 0, so you start from one. Thus the year 10 is still in
the first decade, and the year 2000 is the last year of the 20th
century. The easiest way to remember all of this is that the century
you're in is named after the last year in it -- 19th century was 1801
to 1900, and 20th century is from 1901 to 2000. It's an off-by-one
error, though not very obvious to people used to counting from zero.

'james

James A. Crippen

Apr 28, 2000

Joe Marshall <jmar...@alum.mit.edu> writes:

> Traditional Unix-haters didn't feel a need to offer an alternative for
> a number of reasons:
>
> 1. It's not a `design' when it is patently obvious that no
> forethought went into it.
>
> 2. Any reasonable person with a 6th grade education could do better,
> so several obvious alternatives have probably already been dreamed
> up.
>
> 3. Implying that improvements would be considered or adopted offends
> the jaded demeanor of other Unix haters.
>
> 4. The catharsis comes from the vitriol.

Hahaha! It's nice to see that someone can reason about Unix-haters...
So many of them are secret Unix bigots in the first place. Although
admittedly some of them come from more illustrious backgrounds, like
the Lisp Machines, or TENEX and TWENEX, or the like.

> That being said....


>
> > So can anyone offer suggestions as to Scheme choices? Experience with
> > the various FFIs and compilers? Ideas and concerns about implementing
> > native compilers for the various Schemes? I won't kid myself about the
> > largeness of this undertaking and how much work it will take to
> > implement...
>

> MIT Scheme is a hairball, but it has a *great* compiler. Very few
> people (1?) are currently working on developing it.

I had thought that the compiler didn't actually generate independently
executable code, but code only loadable into the interpreter. In that
case the interpreter would have to be loaded and running for anything
else to happen, which would slow the boot process down quite a bit on
slower machines (like mine).

And given that only Chris Hanson (sp?) is apparently maintaining
it, that no new releases have come out for a *long* time, and that
it isn't R5RS compliant (with the requisite implementation of the
macro system), I feel there are too many weights against it.



> MzScheme is under active development, and it has been compiled to a
> standalone kernel using the Flux OS Toolkit.

I had thought about this before, but it does away with the hardware
support that I can get from the Linux/X combination. The Flux toolkit
would allow me to make modules out of all the code that I might use
for hardware support, process scheduling, etc, but then I'd have to
keep watch on gritty parts of the Linux kernel and the X system for
what I'd need to update. That would defeat much of the purpose of
this in the first place.

In the end I'd really like to have something akin to a Linux
distribution, but with Scheme-based programs replacing much of the OS.
Given that many tools for developing such distributions are already
available, I feel that this is a goal with some near-future promise.

'james

James A. Crippen

Apr 28, 2000

Tim Moore <mo...@herschel.bricoworks.com> writes:

> On 25 Apr 2000, The Almighty Root wrote:
>

> > The most irritating part of Unix IMHO is not the design of the kernel
> > (yeah, yeah it's a monolithic spaghetti ball) or the functionality of
> > system calls (yeah, yeah, no PCLSRing) or the unrecoverability of kernel
> > panics, or whatever else is associated with the kernel and driver
> > implementations. What's *really* irritating about the Unix design is
>

> Ya know, I've seen other references to "PCLusering" in Lisp groups
> recently, I think I know what it means, and I've read Gabriel's parable.
> Given that BSD has had restartable system calls for the last 17
> years, could someone explain to me what "PCluser" problems still exist in
> modern Unix?

I'm not sure about the BSD implementation, but in ITS ISTR any system
call could not only be restarted, but totally backed out of such that
the system call seemed to never actually have happened. The feeling
of the ITS hackers is that if this was already done once there's no
reason for anyone not to implement it again, since the brain work of
inventing it has already been done. Nevermind the fact that all the
ITS source was written in an incompatible (sorta) version of the
PDP-10 assembly language (which had many features of the higher level
languages of the time, in fact), and that the PDP-10 instruction set
had certain aspects that were hard to duplicate on other platforms.
And that much of the code to ITS is impossible to read without
commentary from the original authors.

There's a paper about PCLSRing written by Alan Bawden whose title I can't
seem to recall. But if you search for his name and the string "PCLSR"
you'll probably hit paydirt. Or you could stop by alt.sys.pdp10,
which has been very active recently, and is filled with crufty hackers
discussing various crufty aspects of the -10 series computers.

If I'm wrong about what I said I apologize in advance. It's been well
over a year and a half since I read that paper, and I've never worked
on an ITS system. Just read about them and appreciated their
grandeur. And browsed some source and docs.

'james

Joseph Dale

Apr 28, 2000

"James A. Crippen" wrote:
>
>
> There's a paper about PCLSRing written by Alan Bawden that I can't
> seem to recall. But if you search for his name and the string "PCLSR"
> you'll probably hit paydirt.

This must be the one you're thinking of:
http://www.inwap.com/pdp10/pclsr.txt

James A. Crippen

Apr 28, 2000

cbbr...@news.hex.net (Christopher Browne) writes:

> Centuries ago, Nostradamus foresaw a time when Olin Shivers would say:

> >OSKit gets you up and running on the bare metal pretty painlessly. It's
> >been used to get Scheme, Java & ML systems up on raw machines.
>
> The research material that has come out of that group has been rather
> neat; the substrate that pulls in Linux and FreeBSD drivers via
> something resembling COM is the _slickest_ idea of modern days for
> gaining some advantage from the development of device drivers for
> Linux and FreeBSD.
>
> I am, however, a bit skeptical that this approach is of _massive_
> benefit.
>
> The problem that virtually all attempts at "LispOS" implementations
> have fallen prey to is that of getting caught up in having to support
> all sorts of bizarre sorts of hardware.

This is exactly what I already came to terms with. I've never liked
writing drivers, or any other code that operates at a similar level.
Even serial communications programs bug me. I don't like to think of
bit-shifting and masking unless I have to. It takes too many cycles
best spent on other things. Writing something which is essentially
*only* that is right out, in my opinion.



> The OSKit approach looks like the one that most plausibly offers a
> route to get _some_ benefit from the _massive_ efforts going into
> hardware support on Linux and *BSD; it is, nonetheless, only providing
> the hardware support that was available in early 1997. (Linux 2.0.29)
> Further, OSKit is not portable to more than IA-32 systems. More is
> predicted, but I rather think that it has been predicted for several
> years now, to no avail.

Also note that keeping the OSKit up to date requires extensive
knowledge of both *BSD and Linux and following their respective
development processes intently. Understanding the changes being made
to the entire kernel structure of both systems, in parallel, is a very
difficult undertaking. Managing to unglue these parts and meld them
into the OSKit is equally nontrivial. Doing this alone, even just
once to bring things up to date before you develop your OS, is
unreasonable at best.



> The approach that seems _rather_ more "production-worthy," at this
> point, is that of building a "Lisp System" by layering a Lisp-based
> set of user space tools atop a kernel coming from Linux or one of the
> BSDs.

Which is what I mentioned earlier. I figure that replacing the user
space of a Linux system in an incremental fashion will succeed where
all other attempts have failed. Linux is already the most-ported OS
in history. Unreasonable amounts of support for all sorts of generic,
crufty, and crappy hardware is already available, and the list is
growing longer as I type. If a Scheme-based user space was
implemented atop this then the only thing that would need porting
would be the compiler and interpreter, and total portability would be
achieved, modulo programs depending on hardware support, which I think
would be few.



> >If you don't like that stuff, you can replace it with something written in any
> >good Unix-based Scheme *without* getting into the mess of doing your own
> >OS. The init process can be anything you want it to be; its architecture is
> >not baked into the Unix kernel design. Its job is one very well suited to
> >Scheme. As are things like inetd and sendmail -- a Scheme-based mail system
> >would be a fine thing.

Agree. An MTA is one of the things I'd like to tackle after replacing
init and friends.



> Yes, indeed.
>
> <ftp://linux01.gwdg.de/pub/cLIeNUX/interim/> is the home of cLIeNUX, a
> Linux that is essentially "Forth-based."
>
> Notable properties:
>
> - It uses a very different filename hierarchy that is very
> non-UNIX-like:
> <ftp://linux01.gwdg.de/pub/cLIeNUX/descriptive/DSFH.html>
> "cLIeNUX now implements what I call the DSFH, the Dotted Standard
> Filename Hierarchy. I had some nice docs on this that got vaporized
> in a reboot accident. This happens when doing a distro. For now,
> look at the DSFHed script, and the symlinks in / . DSFH makes the
> standard unix filenames invisible, and modifies them. They are
> still there though, in modified form. Stuff that looks for
> e.g. /bin automatically can be converted to look for /.bi
> automatically, automatically. And the user gets sensible names to
> look at in her native language. Sorry if that sounds crazy."

Why is it that when anyone proposes a replacement or redesign for Unix
Brain Damage they always feel as though they have to apologize for
their craziness? I already apologized myself. It's almost
instinctive that hordes of Unix weenies are going to pour out of the
hills with their little furry hats on waving curved swords and silk
banners, screaming in arcane Mongolian tongues like Tcsh, Awk, and
Perl, lusting after the thoughtful person's blood and his female
family members.

I personally don't like changing the structure of the root directory
too much, but I would like to see the /usr and /usr/local hierarchies
folded in with root. There are reasons for the separation, but as
machines become more and more single-user oriented these distinctions
become lost. And the introduction of /../sbin directories has
provided a much more effective separation of binaries than /bin and
/usr/bin ever did.

I really differ with native language directories. There's no reason
to change them since they really aren't in anyone's human language.
Until I used Unix I'd never have been able to identify what /sbin
meant. Nor
would /var/adm mean anything to me. While these threeletterisms are
supposedly mnemonic there really isn't any solid meaning attached to
them. I just think of /var as where the logs and assorted program
state files go. /etc is where the config files go, except for some
which want to be in /var somewhere. /usr is the big filesystem with
most of everything on it. To me it has nothing to do with a user, it
just happens to be pronounced that way. /gbr would have as much
meaning. (Dutch `gebruiker'.)



> - It is based on LIBC5, and uses C and FORTH as the base programming
                   ^^^^^ not so good -- this means binary
                         incompatibility with the newer Linux systems who
                         use glibc-2 aka libc-6.
> languages.

(It's nice to see FORTH now and then, though. It's been undercover
with minimal press since the days of the 8 bit micros...)



> It _appears_ that it uses a customized init; that is the _really
> crucial_ thing that would change in creating a "LispOS" atop the Linux
> kernel.

I agree, which is why I decided that it should be the first thing to
be redesigned. The fact that a Unix kernel still lurks underneath
still shapes the design to some extent (like the extensive use of
cheap process spawning with fork/exec), but the end result should be
plenty foreign to the Unix state of mind. The design of the new init
will set the style for the rest of the system. If it's too klugy then
the rest of the system will seem vaguely patched together as well. If
it's rock solid, indestructible, and has the elegance and flair of a
Japanese castle with the same toughness, then the system will be a big
win.

> I'm not sure if cLIeNUX init is written in FORTH; that would
> be a pretty appropriate thing, although I somehow suspect that it is
> not.
>
> Everything else, on Linux, is invoked via init, whether directly or
> indirectly, so that if you change init, that provides a substantially
> different character to the system.

Yes, what I said. I should read ahead more instead of shooting from
the hip.



> The other notable Linux that has a Rather Unique Init is
> <http://www.pell.chi.il.us/~orc/Mastodon/> David Parsons' "Mastodon."
>
> The point, if it's not clear, is that there is fairly ample opportunity
> to customize a system _based on Linux_ into whatever form you like.
> cLIeNUX is an example of how a "Forth person" built something that
> runs a C-and-Forth "userspace."

Hear, hear.



> Creating a "userspace" in your favorite image may be adequate to
> provide the environment desired, and if that be so, that is likely to
> be a stabler choice than most of the alternatives, as the ongoing
> development of Linux-as-kernel provides a platform that can improve
> without necessarily forcing you to rewrite great gobs of it each time
> Intel comes out with a new CPU.

The primary bitch that I heard from various Lispophiles about not
using a Lisp-based kernel was that it would be inconvenient to patch
the running kernel with new code. And it wouldn't be as easy to
modify the existing code for the kernel. In short, it wouldn't be
like the Lisp Machines.

I have to say that both arguments are bogus. There's no reason for
anyone to even *care* what's going on in the kernel anymore. There
are people who have devoted their entire careers to nothing but
kernel hacking. Even the competent Unix user doesn't understand
what's actually happening in a kernel aside from generalities, and a
Scheme-based kernel would be just as difficult to understand (modulo
readability ;), and would be more inefficient than the C-based
monsters that are out there, simply because Scheme wouldn't provide
the near 1-1 assembly to language mapping that C does, which seems
essential to hardware control. (The Lisp Machines got off easy since
they only had one small set of hardware to work with, and the drivers
for said hardware could be nearly perfected. The hardware was also
totally known, something that doesn't always exist on modern
machines.)

If you care about the kernel and its design so much, then write one.
I'd be happy to use it, if it supported my hardware. If it doesn't,
then keep writing. Otherwise, I'll be happy with my Scheme
environment, thank you.

> >I have, over time, moved a lot of the /etc scripts on my notebook
> >over to scsh -- ppp dialup, pcmcia, config bits, backup dumps,
> >etc. It's *very* pleasant to do this kind of stuff in scsh.
>
> This is an area in which it is quite unfortunate that there _hasn't_
> been any improvement on UNIX; there have traditionally been two
> approachs:
> a) The BSD way, where you have a script that starts up desired
> services, and
> b) The SysV way, where there is a boatload of scripts that start up
> individual services, and then symbolic links to turn this into a
> list that can be executed.
>
> People of both 'religion' take potshots at the other, so that the only
> choice is between the Right Way, which is the init that _I_ use, and
> the Wrong Way, which is the init that _you_ use.
>
> Virtually no examination of the question, "Is there perhaps a better
> way?"

Most people have been either too busy to care (until they have to hack
init scripts) or too afraid of retaliation from *both* camps united
against them to broach the subject, IMO.



> David Parsons' approach seems more like using a UNIX "Makefile" to
> establish a set of service dependancies that need to be satisfied.
>
> There are some deadlock conditions to worry about, but it would be a
> Truly Good Thing to try to come up with a better way of managing this
> stuff.

I don't suggest trying to automatically establish dependencies between
services. It just seems smarter to have the administrator write such
things themselves, since they can comprehend manual pages better than
the computer can. No tackling AI-complete problems for me...
Providing a simple mechanism for implementing the dependency plan
seems much easier, and there's less probability of klugification of
the design.
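
To make `simple mechanism' concrete, here is a minimal sketch of what
an administrator-written dependency plan and the machinery that runs
it might look like in Scheme: a plain depth-first walk that starts
each service after the things it depends on. Every name in it
(*services*, start-service!, the example services) is made up for the
illustration; this is not a proposal for Schema's actual init.

  ;; Hand-written by the administrator:
  ;;   (name (dependencies ...) start-thunk)
  (define *services*
    `((syslog  ()               ,(lambda () (display "starting syslog\n")))
      (network ()               ,(lambda () (display "starting network\n")))
      (portmap (network)        ,(lambda () (display "starting portmap\n")))
      (nfs     (portmap syslog) ,(lambda () (display "starting nfs\n")))))

  ;; Start NAME exactly once, after starting everything it depends on.
  ;; STARTED lists services already started (or in progress, which also
  ;; quietly cuts dependency cycles instead of looping forever).
  (define (start-service! name started)
    (if (memq name started)
        started
        (let* ((entry (assq name *services*))
               (deps  (cadr entry))
               (run!  (caddr entry))
               (started*
                (let loop ((ds deps) (s (cons name started)))
                  (if (null? ds)
                      s
                      (loop (cdr ds) (start-service! (car ds) s))))))
          (run!)
          started*)))

  (define (start-all!)
    (let loop ((names (map car *services*)) (started '()))
      (if (null? names)
          'ok
          (loop (cdr names) (start-service! (car names) started)))))

Running (start-all!) prints the four "starting ..." lines, with
network before portmap and all three prerequisites before nfs, no
matter how the entries are ordered in the plan.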



> Note that the Software Carpentry
> <http://software-carpentry.codesourcery.com/> project is seeking to
> build tools to supercede autoconf, make, Expect, and Bugzilla.
> They've got some funding, and a goodly hundred or so candidate
> utilities proposed in the various tool classifications.
>
> Something good may come out of _that_.
>
> >>If you think I'm crazy then go ahead and say it,
> >
> >You are crazy, but that's not important. The only thing that matters
> >is whether or not you do anything. Do *anything*, and you matter.
>
> Indeed.

Thanks for the votes of confidence. :)



> Take a Linux kernel, write an "init" that uses Scheme-based startup
> scripts, and build up a "takes-a-few-floppies" distribution that
> parallels cLIeNUX in having a user-space that is largely coded in
> Scheme, and this can become an interesting project.
>
> Unfortunately, there is a dearth of Schemes that compile directly to
> machine code; that is rather more common with Forth.

My current problem. I'm afraid of having to write a compiler that
generates *independently* linkable and loadable objects before I have
anything to work with. I've not written something of this complexity
before, and I'm worried that it will never get to a point of
usability. I'll get mired in the Turing Tarpit, as it were, and never
be able to move on to the real goal.

By `independently' I mean a native binary object that can be used just
like the typical .o file generated by a C compiler. A raw binary
object that can be linked to libraries and executed independent of any
existing Scheme implementation. This way Scheme doesn't have to be
running before anything happens, and we escape the situation that the
Lisp Machine OS (and its descendants) was in, that Lisp had to be
started first before anything else could happen, and that namespace
pollution was almost inevitable, even with a powerful module system.

> It might be more
> "natural" to implement this using CMU Common Lisp instead. But the

I really don't want to have to do this since I'd be writing a
Lisp-based OS and not a Scheme-based OS. I'll use MIT Scheme before I
start using Lisp. I'd even have to change the name then, and I *like*
`Schema' -- it even has a neat plural form! :)

> exploration of how to implement this would doubtless provide
> interesting insights and learning...

Oh yeah. Learning. That's all I'm doing right now. And all I'll
ever be doing...

I think the real goal I have is that nobody will have to do this
again. Linux looks like it's going to persist, so this Scheme-based
replacement will likely hang on too, if it gets anywhere. But nobody
can predict the future, not even Nicholas Negroponte.

> --
> A student, in hopes of understanding the Lambda-nature, came to
> Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> true", asked the student, "that PL-1 has many of the same data types
> as Lisp?" Almost before the student had finished his question,
> Greenblatt shouted, "FOO!", and hit the student with a stick.

Replace PL/1 with C. Much more current, that. Same damned problem.
"FOO!" *smack*

Does anyone know the actual event behind this koan?

'james

James A. Crippen

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
Joseph Dale <jd...@uclink4.berkeley.edu> writes:

That's it precisely. I'll grab it right now for my files.

'james

Guillermo 'Bill' J. Rozas

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
ja...@fredbox.com (James A. Crippen) writes:

> I had thought that the compiler didn't actually generate independently
> executable code, but code only loadable into the interpreter. In that
> case the interpreter would have to be loaded and running for anything
> else to happen, which would slow the boot process down quite a bit on
> slower machines (like mine).

A minor correction. The interpreter is not needed (although it is
always there). It wouldn't be hard to splice it out.

The MIT Scheme runtime system is needed. This is composed of both a
library written in C (pretty minimal but includes GC and the guts of
call-with-current-continuation) and a large library written in Scheme
and compiled.

You don't need any interpreted code or interpreter -- in fact, when
you start MIT Scheme, there isn't any interpreted code.

The compiler doesn't produce independently-executable code, but at a
similar level neither does your C compiler -- you need anything from
crt0.o to the C library (including stdio, stdlib, etc.) in Unix, and
similarly in Windows (that's what most DLLs are about).

What MIT Scheme doesn't have is a linking loader separate from the
interactive one -- again, totally orthogonal from interpretation.

Guillermo 'Bill' J. Rozas

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
ja...@fredbox.com (James A. Crippen) writes:

> I've never understood why compiling call/cc should be so much problem.
> I'm under the impression (from not too thoroughly reading various PhD
> theses (like Amr Sabry's)) that in theory an arbitrary Scheme program
> making use of call/cc can be CPS transformed into a program with only
> explicit continuations. From there some partial evaluation and
> optimization can be done, if wished, and then the program can be
> transformed into machine code (or C, which is close enough). In that
> case, since call/cc is transformed into various shapes of explicit
> continuations, it shouldn't serve to be a problem. Am I confused
> about this? Or is theory not directly applicable to practice in this
> case? Or is it just that many implementations have been made by
> people unfamiliar with the CPS transformation and competitors (I don't
> buy that *at all*)?

There is no _conceptual_ problem with call-with-current-continuation
(ignoring dynamic-wind, which adds some quirks).

There are plenty of pragmatic issues, however.

To a coarse approximation, there are two major ways to implement
Scheme (and ML, which at this level is indistinguishable)

1. CPS-based. This converts all programs to explicit continuation
passing style. Continuations then become simple closures that can
be handled identically to all others -- in particular, they can
easily be heap allocated. call-with-current-continuation is
conceptually trivial.

However, just because this is simple, it is not necessarily
desirable.

In particular, there are plenty of reasons why stack allocation is
preferable to heap allocation. If you go the full blown (true) CPS
way, then it becomes difficult to do stack allocation, and this can
cause performance problems for code that doesn't use
call-with-current-continuation (although it may make programs that
do use it relatively faster).

In theory, a full-blown extent and escape analysis on the resulting
CPS program should allow you to stack-allocate some of the
closures, perhaps even some that were not originally continuations.

In practice, I don't know of anyone who's done this (but I'm
somewhat out of touch), especially since extent and escape analysis
are almost always inconclusive in the presence of separate
compilation (different modules compiled in isolation).

Thus the only CPS-based systems that retain stack allocation (to my
knowledge) are those that use "pseudo-CPS". They use syntactic
CPS, but retain the distinction between those closures that arise
from CPS (and hence "stack-allocatable"), and those that arise in
other ways and will not be allocated on a stack (heap or none at
all).

In these "pseudo-CPS" systems, since continuations are
stack-allocated, call-with-current-continuation must be implemented
using one of the many techniques used in the direct systems:


2. Direct systems. These don't do CPS. They consider
call-with-current-continuation a library function to be implemented
in the runtime system, and otherwise compile Scheme in a way
similar to how most other languages are compiled -- except for true
tail recursion, which adds its own warts, requiring either lambda
lifting or some real cleverness.

These systems typically use a "call stack" (in Scheme it is not a
call stack, but a continuation stack, since tail calls don't grow
it).

call-with-current-continuation must then manage the stack by
copying as necessary. The details differ depending on the detailed
technique used. There are many possibilities (and I'm sure I'm
missing some):

- stop and copy in/out from a global area
- incremental copy in/out
- using stack-lets and copying stack-lets only when explicit
continuations are invoked.


Note that in either case call-with-current-continuation is not a
problem.

In the true CPS systems, it becomes just closure creation, something
which the compiler must know how to do.

In the pseudo-CPS or direct systems it becomes a problem for the
runtime system, and not for the compiler -- to the compiler it is just
like any other external call (e.g. length, apply, or
open-window-with-scrollbars-and-fuzzy-corners).
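
For anyone who hasn't seen CPS spelled out, here is a tiny hand-written
illustration of the idea (only the flavor of the transformation, not
what any particular compiler emits): every procedure gets an extra
argument, its continuation, and "returns" by tail-calling it, at which
point call-with-current-continuation really is just handing that
argument to the user as an ordinary procedure.

  ;; Direct style: the continuation is implicit (the caller's frame).
  (define (sum-list lst)
    (if (null? lst)
        0
        (+ (car lst) (sum-list (cdr lst)))))

  ;; CPS: the continuation K is explicit, and is just a closure.
  (define (sum-list-cps lst k)
    (if (null? lst)
        (k 0)
        (sum-list-cps (cdr lst)
                      (lambda (rest) (k (+ (car lst) rest))))))

  ;; (sum-list-cps '(1 2 3) (lambda (x) x))  =>  6

  ;; In a CPS world, call/cc only has to wrap K up as a procedure that
  ;; ignores its own continuation and resumes the captured one.
  (define (call/cc-cps receiver k)
    (receiver (lambda (value ignored-k) (k value)) k))

Note that each of those (lambda (rest) ...) continuations is an
ordinary closure; whether it can live on a stack or must go to the
heap is exactly the allocation question discussed above.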

There are additional complications if your target is not machine code
(or assembly language -- same difference) and is a high-level language
that does not provide true and complete tail recursion (e.g. C,
although it is hard to call it a high-level language).

- True CPS systems rely extremely heavily on proper tail calls, since
even "returning" involves a tail call. Thus some of the
approximations to true tail recursion that some implementations have
done (e.g. Scheme->C) are not feasible in a true CPS system.
Getting true tail calls out of C (or Pascal for that matter) is
painful. There are several long-standing tricks such as driver
loops [*] and some new ones to provide true tail recursion.

- For direct systems, you have to be able to reify the control stack
-- something that most other languages don't let you do. You again
end up with a painful task. This can be done by a handful of other
techniques (e.g. keeping a dual "data" stack, or resorting to
assembly/machine language for the core of
call-with-current-continuation).

[*] By driver loop I mean that the implementation never allows the
host call stack to get very deep. Every so often (the details of
when and how vary according to the implementation), instead of doing
a native call, the current procedure returns to an outer driver loop
with some state and arguments that cause it to call the next
procedure:

  extern SCHEME_OBJECT proc, args[N];

  void
  driver_loop (void)
  {
    while (1)
      {
        scheme_funcall (proc, args[0], ... args[N - 1]);
        /* And invoking proc eventually overwrites proc and args for
           the next iteration.
         */
      }
  }

This is the same technique that the original Scheme implementation
used to get true tail recursion out of MacLisp (which doesn't
guarantee it).
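
The same driver-loop idea can also be played with entirely inside
Scheme as a trampoline: each step returns a thunk for the next step
instead of calling it, and a small loop bounces on the thunks so the
host stack never grows. A toy sketch (all names here are invented for
the illustration):

  ;; Keep invoking the returned thunk until something that is not a
  ;; procedure comes back; treat that as the final answer.
  (define (trampoline thunk)
    (let loop ((result (thunk)))
      (if (procedure? result)
          (loop (result))
          result)))

  ;; Mutually tail-recursive even?/odd?, with each "tail call" returned
  ;; as a thunk rather than performed directly.
  (define (even?* n)
    (if (= n 0) #t (lambda () (odd?* (- n 1)))))

  (define (odd?* n)
    (if (= n 0) #f (lambda () (even?* (- n 1)))))

  ;; (trampoline (lambda () (even?* 1000000)))  =>  #t, in constant stack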

Joe Marshall

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
ja...@fredbox.com (James A. Crippen) writes:

> > A student, in hopes of understanding the Lambda-nature, came to
> > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> > true", asked the student, "that PL-1 has many of the same data types
> > as Lisp?" Almost before the student had finished his question,
> > Greenblatt shouted, "FOO!", and hit the student with a stick.
>
> Replace PL/1 with C. Much more current, that. Same damned problem.
> "FOO!" *smack*
>
> Does anyone know the actual event behind this koan?

I believe that Danny Hillis wrote it. I wouldn't know if there was an
actual event upon which this is based, but with Greenblatt involved, I
wouldn't rule it out.

Joe Marshall

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
ja...@fredbox.com (James A. Crippen) writes:

> I've never understood why compiling call/cc should be so much problem.
> I'm under the impression (from not too thoroughly reading various PhD
> theses (like Amr Sabry's)) that in theory an arbitrary Scheme program
> making use of call/cc can be CPS transformed into a program with only
> explicit continuations. From there some partial evaluation and
> optimization can be done, if wished, and then the program can be
> transformed into machine code (or C, which is close enough). In that
> case, since call/cc is transformed into various shapes of explicit
> continuations, it shouldn't serve to be a problem. Am I confused
> about this? Or is theory not directly applicable to practice in this
> case? Or is it just that many implementations have been made by
> people unfamiliar with the CPS transformation and competitors (I don't
> buy that *at all*)?

You can do this, but it comes with a price: the CPS code might not
use continuations in a stack-like manner. You have two alternatives,

1) punt on using the stack and just heap allocate all your
continuation frames,

2) make your compiler `smart enough' to figure out when it can use the
stack.

Since cwcc is so rarely used in production code, it seems
reasonable to put the entire burden of using cwcc on the primitive
itself rather than in the compiler.
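
A two-line demonstration of why continuation frames can escape any
stack discipline: the captured continuation below outlives the call
that created it, so the frames it needs cannot simply be popped. (How
a given REPL reports the re-entry varies between implementations.)

  (define saved-k #f)

  (define (capture)
    (+ 1 (call-with-current-continuation
          (lambda (k)
            (set! saved-k k)     ; stash the continuation for later use
            0))))

  ;; (capture)     =>  1   ; normal return the first time through
  ;; (saved-k 41)  ;; re-enters the old (+ 1 []) context, producing 42 there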


> Since 1900 or earlier?? Sorry... But I had to dig at that since I've
> been kidding everyone else about this Brand New Century stuff. There
> was no year 0, so you start from one. Thus the year 10 is still in
> the first decade, and the year 2000 is the last year of the 20th
> century. The easiest way to remember all of this is that the century
> you're in is named after the last year in it -- 19th century was 1801
> to 1900, and 20th century is from 1901 to 2000. It's an off-by-one
> error, though not very obvious to people used to counting from zero.

Yes, but if you were planning a big party for the end of the
millennium, you might find people less enthusiastic because they were
partying about 4 months ago.

Joe Marshall

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
g...@cobalt.transmeta.com (Guillermo 'Bill' J. Rozas) writes:

Hi, Bill.

> - True CPS systems rely extremely heavily on proper tail calls, since
> even "returning" involves a tail call. Thus some of the
> approximations to true tail recursion that some implementations have
> done (e.g. Scheme->C) are not feasible in a true CPS system.
> Getting true tail calls out of C (or Pascal for that matter) is
> painful. There are several long-standing tricks such as driver
> loops [*] and some new ones to provide true tail recursion.
>

> [*] By driver loop I mean that the implementation never allows the
> host call stack to get very deep. Every so often (the details of
> when and how vary according to the implementation), instead of doing
> a native call, the current procedure returns to an outer driver loop
> with some state and arguments that cause it to call the next
> procedure:
>
> extern SCHEME_OBJECT proc, args[N];
>
> void
> driver_loop (void)
> {
> while (1)
> {
> scheme_funcall (proc, args[0], ... args[N - 1]);
> /* And invoking proc eventually overwrites proc and args for
> the next iteration.
> */
> }
> }
>
> These is the same technique that the original Scheme implementation
> used to get true tail recursion out of MacLisp (which doesn't
> guarantee it).

Baker suggested a trick where you never pop the C stack but just let
it grow in one direction. When you fall off the end, you run the
garbage collector to evacuate the continuations off the stack and then
use LONGJMP to clear the stack. This gives you proper tail recursion
*and* first-class continuations in one whack, bypassing at least some
of the problems with using C.

--
~jrm

David Rush

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
ja...@fredbox.com (James A. Crippen) writes:
> cbbr...@news.hex.net (Christopher Browne) writes:

> which want to be in /var somewhere. /usr is the big filesystem with
> most of everything on it. To me it has nothing to do with a user, it
> just happens to be pronounced that way. /gbr would have as much
> meaning. (Dutch `gebruiker'.)

/gbr in English might conceivably be a contraction of /goober, which
is a *wonderful* place to put the user-space code ;)

> Scheme-based kernel would be just as difficult to understand (modulo
> readability ;)

A big issue actually...

> and would be more inefficient than the C-based
> monsters that are out there, simply because Scheme wouldn't provide
> the near 1-1 assembly to language mapping that C does, which seems
> essential to hardware control.

I've just *got* to disagree with this. I haven't felt as close to the
silicon as I do in Scheme for *years*. Once you get it into your
head that function names are (equivalent to) labels and parameters are
(equivalent to) registers it gets *really* cool.

Now, I'm not saying that R5RS Scheme is a systems programming
language, but it's not very far removed from being one. The changes
I'd make would be:

1) replace the numeric tower with machine-integer types and
*nothing* else
2) add a way to directly access non-GC memory
3) global interrupt/exception handlers - handler taking the
current continuation as one parameter, other params need
more thought

more radical (or expensive) ideas include:

4) pitch ports as a standard datatype
5) replacing symbols w/Scheme48's enumerated
6) a (ML-ish) module system that admits categorical
composition of functionality

I *think* that such a system (changes 1-3) would be sufficient for a
pretty groovy and potentially fast systems programming language. And
C-compatibility? Don't need it. Let the C compiler eat cake...

<re: compilers for systems programming in Scheme>


> By `independently' I mean a native binary object that can be used just
> like the typical .o file generated by a C compiler.

If you're ok about linking w/the Scheme RTS you'll be OK. If not,
you're going to need a compiler that does hefty region analysis. Those
beasties aren't common anywhere yet, although I have the impression
that Jeff Siskind is trying to incorporate that into Stalin.

david rush
--
Thinking dangerous thoughts...

Scott Ribe

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to

The Almighty Root wrote:
>
> It's rather patently obvious that a vastly better design is possible
> than the current init cruft. Most any design for init would be better
> than such a mess. I'm not suggesting something even more inane, like
> the garbage that Windos NT has foisted off on the public, with its
> little window in an itty-bitty font that contains supposed `services'
> (Unix daemons) that you click on and press buttons that frequently don't
> do the same thing they advertise. That's just as bad if not worse than
> Unix init.

Well, Apple recently showed a glimpse of what they're working on for Mac
OS X:

- daemons/scripts/services are rewritten to take their init info from
files in the format of XML property lists, eliminating some of the mess
of the zillion different config file formats

- there is a simple graphical editor built in that presents the property
lists as outlines

- the services all have 2 new properties added to whatever else is in
their config files: DEPENDS and PROVIDES

- the system tracks the DEPENDS and PROVIDES properties and
automatically determines load order, eliminating the messy crap of
directories full of scripts with names numbered to force load order

Food For Thought?

Daniel S. Riley

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
ja...@fredbox.com (James A. Crippen) writes:
> Linux is already the most-ported OS in history.

In terms of number of platforms supported, NetBSD runs on more
platforms than Linux--about the only place Linux has an edge is
support for different i386 configurations. The NetBSD release cycle
also tends to be more stable than Linux's, which might make it a
better target for replacing userland.

--
Dan Riley d...@mail.lns.cornell.edu
Wilson Lab, Cornell University <URL:http://www.lns.cornell.edu/~dsr/>
"History teaches us that days like this are best spent in bed"

felix

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to

Joe Marshall wrote in message ...

>
>Baker suggested a trick where you never pop the C stack but just let
>it grow in one direction. When you fall off the end, you run the
>garbage collector to evacuate the continuations off the stack and then
>use LONGJMP to clear the stack. This gives you proper tail recursion
>*and* first-class continuations in one whack, bypassing at least some
>of the problems with using C.
>


It's not quite proper: the C stack still grows, so you keep allocating
memory (for the C stack frame, which is built anyway) even if you
are in a tight loop that does not cons as such.

But the approach really is elegant. I'm working on a compiler that uses
this strategy and it works fine. The compiler itself is only about 2600
lines of Scheme code and the performance of the generated executables
is reasonable (and call/cc-intensive benchmarks really burn!).
Watch this space for further news.

felix


Christopher Browne

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
Centuries ago, Nostradamus foresaw a time when James A. Crippen would say:

>cbbr...@news.hex.net (Christopher Browne) writes:
>> Centuries ago, Nostradamus foresaw a time when Olin Shivers would say:
>> >OSKit gets you up and running on the bare metal pretty painlessly. It's
>> >been used to get Scheme, Java & ML systems up on raw machines.
>>
>> The research material that has come out of that group has been rather
>> neat; the substrate that pulls in Linux and FreeBSD drivers via
>> something resembling COM is the _slickest_ idea of modern days for
>> gaining some advantage from the development of device drivers for
>> Linux and FreeBSD.
>>
>> I am, however, a bit skeptical that this approach is of _massive_
>> benefit.
>>
>> The problem that virtually all attempts at "LispOS" implementations
>> have fallen prey to is that of getting caught up in having to support
>> all sorts of bizarre sorts of hardware.
>
>This is exactly what I already came to terms with. I've never liked
>writing drivers, or any other code that operates at a similar level.
>Even serial communications programs bug me. I don't like to think of
>bit-shifting and masking unless I have to. It takes too many cycles
>best spent on other things. Writing something which is essentially
>*only* that is right out, in my opinion.

If the goal is to build a Lisp _environment_, then the creation of
device drivers is largely a distraction, as, while device drivers may
be _necessary_ to have a functioning system, their development biases
towards the "environment" side, and away from the "Lisp" side.

>> The OSKit approach looks like the one that most plausibly offers a
>> route to get _some_ benefit from the _massive_ efforts going into
>> hardware support on Linux and *BSD; it is, nonetheless, only providing
>> the hardware support that was available in early 1997. (Linux 2.0.29)
>> Further, OSKit is not portable to more than IA-32 systems. More is
>> predicted, but I rather think that it has been predicted for several
>> years now, to no avail.
>
>Also note that keeping the OSKit up to date requires extensive
>knowledge of both *BSD and Linux and following their respective
>development processes intently. Understanding the changes being made
>to the entire kernel structure of both systems, in parallel, is a very
>difficult undertaking. Managing to unglue these parts and meld them
>into the OSKit is equally nontrivial. Doing this alone, even just
>once to bring things up to date before you develop your OS, is
>unreasonable at best.

Ah, yes, right you are.

There is the theory that UDI (Uniform Driver Interface)
<http://www.projectudi.org/> could provide a more "universal" way of
coping with this; that seems more to be an attempt by the commercial
UNIX folk to try to lure the Linux developers to create device drivers
that "True UNIXes" can use as well. Which is, in a sense, the same
goal that the OSKit has at heart...

>> The approach that seems _rather_ more "production-worthy," at this
>> point, is that of building a "Lisp System" by layering a Lisp-based
>> set of user space tools atop a kernel coming from Linux or one of the
>> BSDs.
>
>Which is what I mentioned earlier. I figure that replacing the user
>space of a Linux system in an incremental fashion will succeed where
>all other attempts have failed. Linux is already the most-ported OS
>in history. Unreasonable amounts of support for all sorts of generic,
>crufty, and crappy hardware is already available, and the list is
>growing longer as I type. If a Scheme-based user space was
>implemented atop this then the only thing that would need porting
>would be the compiler and interpreter, and total portability would be
>achieved, modulo programs depending on hardware support, which I think
>would be few.

One thing I'd see as plausible is that there _could_ be some value in
having some "hacks" added to the kernel that would be supportive of
the "Lisp Environment" needs.

1. It might be a neat idea to have a Lisp-based equivalent to the
/proc virtual filesystem.

On Linux (and Solaris, and possibly others...), you can head to
the /proc directory and see a directory hierarchy that can be
queried to get kernel-level information about system
configuration. In some cases, you can drop data into the files
and change kernel settings.

Wouldn't It Be Neat to have a Lisp-oriented interface where these
would be mapped onto a tree of "association lists" that one could
explore from within the Lisp environment?

A patch to allow this to be dealt with on "Lisp terms" could even
be fed back to the Official Kernel Tree.
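
As a purely user-space toy in that direction (nothing kernel-side, and
nothing beyond R5RS file and string operations), something like the
following already turns a colon-separated /proc file such as
/proc/meminfo into an association list. proc-file->alist and the
helpers are invented names, and the parsing is deliberately naive:

  ;; Read one line from PORT as a string, or return the eof object.
  (define (read-line* port)
    (let loop ((chars '()))
      (let ((c (read-char port)))
        (cond ((eof-object? c)
               (if (null? chars) c (list->string (reverse chars))))
              ((char=? c #\newline) (list->string (reverse chars)))
              (else (loop (cons c chars)))))))

  (define (string-trim-left s)
    (let loop ((i 0))
      (if (and (< i (string-length s))
               (char-whitespace? (string-ref s i)))
          (loop (+ i 1))
          (substring s i (string-length s)))))

  ;; "MemTotal:   191148 kB"  =>  ("MemTotal" . "191148 kB")
  (define (line->pair line)
    (let loop ((i 0))
      (cond ((= i (string-length line)) (cons line ""))
            ((char=? (string-ref line i) #\:)
             (cons (substring line 0 i)
                   (string-trim-left
                    (substring line (+ i 1) (string-length line)))))
            (else (loop (+ i 1))))))

  ;; (proc-file->alist "/proc/meminfo")
  ;;   =>  (("MemTotal" . "191148 kB") ...)
  (define (proc-file->alist path)
    (let ((port (open-input-file path)))
      (let loop ((acc '()))
        (let ((line (read-line* port)))
          (if (eof-object? line)
              (begin (close-input-port port) (reverse acc))
              (loop (cons (line->pair line) acc)))))))

Mapping the whole /proc tree onto nested alists, and making the
writable entries settable, is where the kernel-side (or at least
library-side) work being discussed here would come in.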

>> >If you don't like that stuff, you can replace it with something written in any
>> >good Unix-based Scheme *without* getting into the mess of doing your own
>> >OS. The init process can be anything you want it to be; its architecture is
>> >not baked into the Unix kernel design. Its job is one very well suited to
>> >Scheme. As are things like inetd and sendmail -- a Scheme-based mail system
>> >would be a fine thing.
>
>Agree. An MTA is one of the things I'd like to tackle after replacing
>init and friends.

Inetd would be interesting to "redo"; I am rather _less_ excited about
replacements for Sendmail when there are already so many of them.

>> - It is based on LIBC5, and uses C and FORTH as the base programming
> ^^^^^ not so good -- this means binary
> incompatibility with the newer Linux systems who
> use glibc-2 aka libc-6.
>> languages.

There are not _massive_ merits to being LIBC5-based; I'd argue in
favor of GLIBC 2.1, as it is _far_ more portable.

>(It's nice to see FORTH now and then, though. It's been undercover
>with minimal press since the days of the 8 bit micros...)

Forth is well-suited to the purpose at hand:
-> It provides a model that makes it natural to have both
"interpreted" and "compiled" forms.

>> Unfortunately, there is a dearth of Schemes that compile directly to
>> machine code; that is rather more common with Forth.
>
>My current problem. I'm afraid of having to write a compiler that
>generates *independently* linkable and loadable objects before I have
>anything to work with. I've not written something of this complexity
>before, and I'm worried that it will never get to a point of
>usability. I'll get mired in the Turing Tarpit as it were, and not
>able to move on to the real goal.
>
>By `independently' I mean a native binary object that can be used just
>like the typical .o file generated by a C compiler. A raw binary
>object that can be linked to libraries and executed independent of any
>existing Scheme implementation. This way Scheme doesn't have to be
>running before anything happens, and we escape the situation that the
>Lisp Machine OS (and its descendants) was in, that Lisp had to be
>started first before anything else could happen, and that namespace
>pollution was almost inevitable, even with a powerful module system.

What The World Probably Needs is a Scheme parser for GCC, so that
you'd do:

% gcc -c some_schemefile.scm -O3
some_schemefile.o
%

Various of the Scheme systems provide Scheme-to-C translations which
might do the trick, albeit with the blemish that you have to be aware
of doing C #include configuration along with any Scheme configuration.
Stalin gets cited a lot, but it seems to be _incredibly_ consumptive
of memory, so I am skeptical that it will ever be of general interest.
--
Rules of the Evil Overlord #13. "I will be secure in my
superiority. Therefore, I will feel no need to prove it by leaving
clues in the form of riddles or leaving my weaker enemies alive to
show they pose no threat."
<http://www.eviloverlord.com/lists/overlord.html>
cbbr...@ntlug.org- <http://www.hex.net/~cbbrowne/lsf.html>

Joe Marshall

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
"felix" <fe...@anu.ie> writes:

> Joe Marshall wrote in message ...
> >
> >Baker suggested a trick where you never pop the C stack but just let
> >it grow in one direction. When you fall off the end, you run the
> >garbage collector to evacuate the continuations off the stack and then
> >use LONGJMP to clear the stack. This gives you proper tail recursion
> >*and* first-class continuations in one whack, bypassing at least some
> >of the problems with using C.
> >
>
>
> It's not quite proper: the C stack still grows, so you keep allocating
> memory (for the C stack-frame, which is build anyway) even if you
> are in a tight loop that does not cons as such.

Since you are discarding it at the rate you are allocating it, it is
properly tail recursive at the Scheme level. What it is at the C
level is another thing.

James A. Crippen

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
g...@cobalt.transmeta.com (Guillermo 'Bill' J. Rozas) writes:

> ja...@fredbox.com (James A. Crippen) writes:
>

> > I had thought that the compiler didn't actually generate independently
> > executable code, but code only loadable into the interpreter. In that
> > case the interpreter would have to be loaded and running for anything
> > else to happen, which would slow the boot process down quite a bit on
> > slower machines (like mine).
>
> A minor correction. The interpreter is not needed (although it is
> always there). It wouldn't be hard to splice it out.
>
> The MIT Scheme runtime system is needed. This is composed of both a
> library written in C (pretty minimal but includes GC and the guts of
> call-with-current-continuation) and a large library written in Scheme
> and compiled.

So what you're saying, and correct me if I'm wrong, is that the guts
of the Scheme system can be linked with, much like a typical shared
object library? Or treated as such, in any case?



> You don't need any interpreted code or interpreter -- in fact, when
> you start MIT Scheme, there isn't any interpreted code.

Yes, I had gathered that after tinkering with it some time ago.



> The compiler doesn't produce independently-executable code, but at a
> similar level neither does your C compiler -- you need anything from
> crt0.o to the C library (including stdio, stdlib, etc.) in Unix, and
> similarly in Windows (that's what most DLLs are about).

Indeed. What I was sort of hoping for was compiled machine code
objects that could be converted to ELF binaries for linking and
executing. So that the usual collection of binary manipulation tools
could be used on them, and that they would be similar to the output of
Unix compilers everywhere. I get the same from my f77 compiler (I
compiled ADVENT not too long ago, worked perfectly), and I figure if a
compiler is generating a code object and the guts of the Scheme system
are available as a shared object library then I could work with Scheme
binaries in the same manner as all other binaries on the system.



> What MIT Scheme doesn't have is a linking loader separate from the
> interactive one -- again, totally orthogonal from interpretation.

Orthogonal from interpretation because interpretation wouldn't require
any other sort of linking loader?

What I'm really looking for, and I'm not sure if I said this already,
is a Scheme system that doesn't have to be *running* to execute Scheme
programs. C doesn't have to be running for me to execute a C program.
I want something which behaves similarly. A compiler which produces
objects suitable for a linker which can produce libraries and
executables for use by ld.so. Something which merges seamlessly with
the existing Unix structure.

'james

James A. Crippen

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
d...@mail.lns.cornell.edu (Daniel S. Riley) writes:

> ja...@fredbox.com (James A. Crippen) writes:

> > Linux is already the most-ported OS in history.
>

> In terms of number of platforms supported, NetBSD runs on more
> platforms than Linux--about the only place Linux has an edge is
> support for different i386 configurations. The NetBSD release cycle
> also tends to be more stable than Linux's, which might make it a
> better target for replacing userland.

Someone else mentioned this to me in an email and I told him that I
would take NetBSD under serious consideration. As I said to him, my
main issue with using it will likely be personal, that I'll be using a
Linux system for development and a separate drive for abuse. If
NetBSD can be booted via Lilo then I will have no qualms about using
it at all, other than that it will take me longer to develop anything,
since I'm not familiar with NetBSD aside from the occasional login and
so don't know anything about its boot process or other intricacies.

I also considered trying to maintain a certain level of platform
independence, supporting more than one platform. This may or may not
be feasible, depending entirely on what sort of back-breaking
gymnastics the different platforms will push me into. If I could get
this running on different kernels then we'd have a big win. But I
have suspicions that this is more difficult than it may appear at
first.

'james

John Clonts

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
James A. Crippen wrote:
>
> cbbr...@news.hex.net (Christopher Browne) writes:
>
[snipped gobs]

> > --
> > A student, in hopes of understanding the Lambda-nature, came to
> > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> > true", asked the student, "that PL-1 has many of the same data types
> > as Lisp?" Almost before the student had finished his question,
> > Greenblatt shouted, "FOO!", and hit the student with a stick.
>
> Replace PL/1 with C. Much more current, that. Same damned problem.
> "FOO!" *smack*
>
> Does anyone know the actual event behind this koan?
>
> 'james

My question is even easier: "Can someone explain what this koan
*means*?"

Thanks,
John

John Clonts

unread,
Apr 28, 2000, 3:00:00 AM4/28/00
to
Christopher Browne wrote:
>
> Centuries ago, Nostradamus foresaw a time when John Clonts would say:

> >James A. Crippen wrote:
> >> cbbr...@news.hex.net (Christopher Browne) writes:
> >>
> >[snipped gobs]
> >> > --
> >> > A student, in hopes of understanding the Lambda-nature, came to
> >> > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> >> > true", asked the student, "that PL-1 has many of the same data types
> >> > as Lisp?" Almost before the student had finished his question,
> >> > Greenblatt shouted, "FOO!", and hit the student with a stick.
> >>
> >> Replace PL/1 with C. Much more current, that. Same damned problem.
> >> "FOO!" *smack*
> >>
> >> Does anyone know the actual event behind this koan?
> >>
> >> 'james
> >
> >My question is even easier: "Can someone explain what this koan
> >*means*?"
>
> If you have to ask, you obviously don't understand the Lambda-nature.
> :-)

Which is exactly why I asked

Christopher Browne

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
Centuries ago, Nostradamus foresaw a time when John Clonts would say:
>James A. Crippen wrote:
>> cbbr...@news.hex.net (Christopher Browne) writes:
>>
>[snipped gobs]
>> > --
>> > A student, in hopes of understanding the Lambda-nature, came to
>> > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
>> > true", asked the student, "that PL-1 has many of the same data types
>> > as Lisp?" Almost before the student had finished his question,
>> > Greenblatt shouted, "FOO!", and hit the student with a stick.
>>
>> Replace PL/1 with C. Much more current, that. Same damned problem.
>> "FOO!" *smack*
>>
>> Does anyone know the actual event behind this koan?
>>
>> 'james
>
>My question is even easier: "Can someone explain what this koan
>*means*?"

If you have to ask, you obviously don't understand the Lambda-nature.
:-)

--
"There is no reason anyone would want a computer in their home". --
Ken Olson, Pres. and founder of Digital Equipment Corp. 1977
cbbr...@hex.net - - <http://www.ntlug.org/~cbbrowne/lsf.html>

Joseph Dale

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
John Clonts wrote:

>
> Christopher Browne wrote:
> >
> > Centuries ago, Nostradamus foresaw a time when John Clonts would say:
> > >James A. Crippen wrote:
> > >> cbbr...@news.hex.net (Christopher Browne) writes:
> > >>
> > >[snipped gobs]
> > >> > --
> > >> > A student, in hopes of understanding the Lambda-nature, came to
> > >> > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> > >> > true", asked the student, "that PL-1 has many of the same data types
> > >> > as Lisp?" Almost before the student had finished his question,
> > >> > Greenblatt shouted, "FOO!", and hit the student with a stick.
> > >>
> > >> Replace PL/1 with C. Much more current, that. Same damned problem.
> > >> "FOO!" *smack*
> > >>
> > >> Does anyone know the actual event behind this koan?
> > >>
> > >> 'james
> > >
> > >My question is even easier: "Can someone explain what this koan
> > >*means*?"
> >
> > If you have to ask, you obviously don't understand the Lambda-nature.
> > :-)
>
> Which is exactly why I asked

Hmm... Perhaps you should contemplate this koan:

"A monk asked Joshu, a Chinese Zen master: "Has a dog Buddha-nature or
not?" Joshu answered: "Mu."

thi

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
ja...@fredbox.com (James A. Crippen) writes:

> What I'm really looking for, and I'm not sure if I said this already,
> is a Scheme system that doesn't have to be *running* to execute Scheme
> programs. C doesn't have to be running for me to execute a C program.

this is not entirely correct. grep your system for crt0, etc.

> I want something which behaves similarly. A compiler which produces
> objects suitable for a linker which can produce libraries and
> executables for use by ld.so. Something which merges seamlessly with
> the existing Unix structure.

you could implement an analogous srt0...

thi

Moshe Zadka

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
On Fri, 28 Apr 2000 23:54:41 -0500,
John Clonts <jcl...@mastnet.net> wrote:
>> >> Does anyone know the actual event behind this koan?
>> >>
>> >> 'james
>> >
>> >My question is even easier: "Can someone explain what this koan
>> >*means*?"
>>
>> If you have to ask, you obviously don't understand the Lambda-nature.
>> :-)
>
>Which is exactly why I asked

You *can't* explain a koan. If you could, it wouldn't be a koan. The path to
true enlightenment is long and hard. Sit in front of your computer. Program
in Scheme. Read the koan. Meditate. You will then achieve enlightenment.

-- But how do I know I achieved enlightenment?
-- You will understand the koan.

John Clonts

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
Joseph Dale wrote:
>
> John Clonts wrote:
> >
> > Christopher Browne wrote:
> > >
> > > Centuries ago, Nostradamus foresaw a time when John Clonts would say:
> > > >James A. Crippen wrote:
> > > >> cbbr...@news.hex.net (Christopher Browne) writes:
> > > >>
> > > >[snipped gobs]
> > > >> > --
> > > >> > A student, in hopes of understanding the Lambda-nature, came to
> > > >> > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> > > >> > true", asked the student, "that PL-1 has many of the same data types
> > > >> > as Lisp?" Almost before the student had finished his question,
> > > >> > Greenblatt shouted, "FOO!", and hit the student with a stick.
> > > >>
> > > >> Replace PL/1 with C. Much more current, that. Same damned problem.
> > > >> "FOO!" *smack*
> > > >>
> > > >> Does anyone know the actual event behind this koan?
> > > >>
> > > >> 'james
> > > >
> > > >My question is even easier: "Can someone explain what this koan
> > > >*means*?"
> > >
> > > If you have to ask, you obviously don't understand the Lambda-nature.
> > > :-)
> >
> > Which is exactly why I asked
>
> Hmm... Perhaps you should contemplate this koan:
>
> "A monk asked Joshu, a Chinese Zen master: "Has a dog Buddha-nature or
> not?" Joshu answered: "Mu."

What does Mu mean?

What is referred to by Buddha Nature?

Thanks,
John

John Clonts

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
Moshe Zadka wrote:
>
> On Fri, 28 Apr 2000 23:54:41 -0500,
> John Clonts <jcl...@mastnet.net> wrote:
> >> >> Does anyone know the actual event behind this koan?
> >> >>
> >> >> 'james
> >> >
> >> >My question is even easier: "Can someone explain what this koan
> >> >*means*?"
> >>
> >> If you have to ask, you obviously don't understand the Lambda-nature.
> >> :-)
> >
> >Which is exactly why I asked
>
> You *can't* explain a koan. If you could, it wouldn't be a koan. The path to
> true enlightment is long and hard. Sit in front of your computer. Program
> in Scheme. Read the koan. Meditate. You will then achieve enlightment.
>
> -- But how do I know I achieved enlightment?
> -- You will understand the koan.

Ok, I guess I didn't know what a koan *was*.

So Thank you, at least now I know that my question is silly.

So I must use some other approach to even approach the question.

Is my lack of understanding of the Lambda Nature the reason that my
mind reels at this, which I saw on comp.object the other day:

(define <
  (y
   (lambda (lesser)
     (lambda (x)
       (lambda (y)
         (((if-then-else (is-zero x))
           (lambda ()
             (((if-then-else (is-zero y))
               (lambda () false))
              (lambda () true))))
          (lambda ()
            (((if-then-else (is-zero y))
              (lambda () false))
             (lambda () ((lesser (predecessor x))
                         (predecessor y)))))))))))

I cannot even figure out how to *read* it, i.e. what words or images to
formulate as I come across these nested lambda's. I have been studying
through SICP, and I don't see this ["idiom?" | "style?" ] used. Is this
a "lispy" style that has some different typical way of expressing in
scheme?

Ok, there are probably several levels of "over-my-head-edness" that I'm
in here, but maybe someone will throw this blind pig an enlightening
acorn anyway.

Cheers,
John

Shriram Krishnamurthi

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
David Rush <dr...@netscape.com> writes:

> 6) a (ML-ish) module system that admits categorical
> composition of functionality

Got that and bettered. Next.

'shriram

Joe Marshall

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
ja...@fredbox.com (James A. Crippen) writes:

> g...@cobalt.transmeta.com (Guillermo 'Bill' J. Rozas) writes:
>

> > ja...@fredbox.com (James A. Crippen) writes:
> >

> > > I had thought that the compiler didn't actually generate independently
> > > executable code, but code only loadable into the interpreter. In that
> > > case the interpreter would have to be loaded and running for anything
> > > else to happen, which would slow the boot process down quite a bit on
> > > slower machines (like mine).
> >
> > A minor correction. The interpreter is not needed (although it is
> > always there). It wouldn't be hard to splice it out.
> >
> > The MIT Scheme runtime system is needed. This is composed of both a
> > library written in C (pretty minimal but includes GC and the guts of
> > call-with-current-continuation) and a large library written in Scheme
> > and compiled.
>
> So what you're saying, and correct me if I'm wrong, is that the guts
> of the Scheme system can be linked with much as a typical shared
> object library? Or treated as such, in any case?

The guts of the MIT Scheme system expect there to be some runtime
support for such operations as GC, CONS, file I/O, etc. In theory,
any runtime that matches the API that MIT Scheme expects would be
suitable. In practice, there is exactly one of these: the MIT Scheme
`microcode'. But if you were to write your own support layer, you
could run the rest of MIT Scheme.

> > The compiler doesn't produce independently-executable code, but at a
> > similar level neither does your C compiler -- you need anything from
> > crt0.o to the C library (including stdio, stdlib, etc.) in Unix, and
> > similarly in Windows (that's what most DLLs are about).
>
> Indeed. What I was sort of hoping for was compiled machine code
> objects that could be converted to ELF binaries for linking and
> executing. So that the usual collection of binary manipulation tools
> could be used on them, and that they would be similar to the output of
> Unix compilers everywhere.

You're not going to find that too easily. Your best bet is something
that compiles Scheme to C, then runs it through the standard tool chain.

> I get the same from my f77 compiler (I
> compiled ADVENT not too long ago, worked perfectly), and I figure if a
> compiler is generating a code object and the guts of the Scheme system
> are available as a shared object library then I could work with Scheme
> binaries in the same manner as all other binaries on the system.

No, the Scheme binaries are expecting a Scheme loader, not an ELF or
COFF loader.

> What I'm really looking for, and I'm not sure if I said this already,
> is a Scheme system that doesn't have to be *running* to execute Scheme
> programs.

Isn't that a contradiction? I would expect my scheme system to have
to be running to execute scheme programs.


> C doesn't have to be running for me to execute a C program.

The C compiler doesn't, but the C linker and runtime do. As it happens,
the C linker and runtime are usually built into the kernel.

> A compiler which produces
> objects suitable for a linker which can produce libraries and
> executables for use by ld.so. Something which merges seamlessly with
> the existing Unix structure.

You want a Scheme compiler that produces ELF output. I don't know of any.

Paolo Amoroso

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
On 28 Apr 2000 11:47:24 -0400, David Rush <dr...@netscape.com> wrote:

> I've just *got* to disagree with this. I haven't felt as close to the
> silicon as I do in Scheme for *years*. Once you get it into your

[...]


> Now, I'm not saying that R5RS Scheme is a systems programming
> language, but it's not very far removed from being one. The changes

Aubrey Jaffer, the author of SCM, is already using Scheme for accessing the
bare metal, i.e. for writing device drivers. Check the documents starting
from:

http://www-swiss.ai.mit.edu/~jaffer/SIMSYNCH.html


Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/

James A. Crippen

unread,
Apr 29, 2000, 3:00:00 AM4/29/00
to
John Clonts <jcl...@mastnet.net> writes:

> Ok, I guess I didn't know what a koan *was*.

A koan is a riddle used in Zen Buddhism (and other related forms) to
pose a question or problem to the student whose answer transcends
ordinary human logical thought. The student is expected to find some
answer or reply to the koan in a certain manner that only his master
can recognize. Hopefully after a few of these the student learns to
think extrarationally, and potentially experiences satori (the
blinding flash of enlightenment many people describe) and, with time,
total enlightenment. The typical koan is a Japanese haiku or similar
poetical verse; short, strong, and endlessly open to interpretation
and examination.

AI koans, while somewhat tongue-in-cheek and self-mocking, are in the
same vein. They typically describe events in the life of masters in
AI and related fields, similarly to how Zen koans describe events in
the life of revered Zen masters. They describe some sort of belief or
idea held by the AI community, and are frequently extralogical in the
fashion of traditional Zen koans.

You may wish to see the related entry in the Jargon file,
http://www.tuxedo.org/~esr/jargon/html/entry/AI-koans.html
. The other three entries listed should be helpful as well.



> So Thank you, at least now I know that my question is silly.

It isn't really silly. You just touched on a subject that defies
typical rational analysis. Though perhaps this could have been
explained a bit better, as newcomers are easily confused by such
blatantly irrational things popping up in the context of solidly
rational pursuits such as programming.



> So I must use some other approach to even approach the question.

Spending time immersed in the Scheme language (and related Lispish and
functional languages) and in historical documents of the AI lab
cultures will invariably assist you in understanding not only the
superficial culture of the hackish crowd, its lore, and history, but
also will aid you in understanding the underlying ideas and motives
that produced such languages as Scheme. Probably one of the simpler
windows into the culture is a dated but still relevant paper by
Richard Gabriel,
http://www.ai.mit.edu/docs/articles//good-news/good-news.html
. This is a somewhat non-technical, short paper describing why the
Lisp community as of ten years ago failed to succeed as well as it
could have in the face of microcomputers, Unix, and poorly educated C
hackers. It's well worth the read, and RPG not only defends his
thesis well but brings up many sensitive topics rarely broached
amongst the Lisp hacker community.



> Does my lack of understanding of the Lambda Nature the reason that my
> mind reels at this, that I saw on comp.object the other day:
>
> (define <
> (y
> (lambda (lesser)
> (lambda (x)
> (lambda (y)
> (((if-then-else (is-zero x))
> (lambda ()
> (((if-then-else (is-zero y))
> (lambda () false))
> (lambda () true))))
> (lambda ()
> (((if-then-else (is-zero y))
> (lambda () false))
> (lambda () ((lesser (predecessor x))
> (predecessor y)))))))))))
>
> I cannot even figure out how to *read* it, i.e. what words or images to
> formulate as I come across these nested lambda's. I have been studying
> through SICP, and I don't see this ["idiom?" | "style?" ] used. Is this
> a "lispy" style that has some different typical way of expressing in
> scheme?

What it entails most predominantly is the use of the Y combinator,
inherited (as is the entire language design) from Alonzo Church's
lambda calculus, a logical formalism first presented in the 1930s as a
system for exploring the foundations of mathematics. At this task it
failed due to some mathematical reasons that would be somewhat
unreasonable to explain to someone not familiar with the calculus
already. However it succeeded quite astonishingly in later years as
developments in computing proceeded. Alan Turing proved that a
function which was definable in the lambda calculus was equivalent to
an algorithm computable by a Turing machine, thus showing that the
lambda calculus could be used as a foundation for the study of
computability. Later the lambda calculi (there are actually numerous
variations on the same theme) were used to develop a language for
denotational semantics, the study of how to assign and prove the
meaning (semantics) of programming languages. Today it is still used
for all of these pursuits as well as for explorations in compilation,
functional languages, and much more. The lambda calculi and related
theories merit investigation entirely on their own as well, and
learning to understand the lambda calculi is not only rewarding but,
because of their relative simplicity (as compared to other mathematical
subjects), remarkably easy for anyone with an affinity for
logical thought. I myself have taught the rudiments to a few people
with little background in mathematics or computer science, and all
have found it entertaining and fascinating.

Enough propaganda. The Y combinator is a prime example of the
functions called `fixed point combinators'. A fixed point of a
function is a point in its domain that the function maps to itself.
That is, suppose a function f which maps from a set A to a set B,
that is `f: A -> B'. A fixed point of f is an x in A that equals
f(x). So the input is exactly the same as the output. Y is
particularly strange -- it is termed the `paradoxical combinator'
because applying Y to *any* expression yields a fixed point of that
expression.

This function is a description of the `lesser than' predicate using
the Y combinator and Church numerals.

It's much more sensible once you understand the lambda calculus. If
you aren't interested in doing that then don't worry too much about
it. To become a truly wizardly Scheme programmer you will likely need
to study the lambda calculus intensively at some point. It will teach
you much more about functional programming than any programming
language can.

> Ok, there are probably several levels of "over-my-head-edness" that
> I'm in here, but maybe someone will throw this blind pig an
> enlightening acorn anyway.

As I said, you hit not one but several complicated topics. I hope
I've at least clarified some things for you, but it's more likely that
you're even more confused than you were. All I can say is, don't
despair. I feel the same way and I've been nosing around in these
subjects for four or five years now.

To the group: Perhaps I've not explained things as best as I can, or
possibly gotten a few things wrong. I apologize in advance. My
learning is far from complete, and indeed I hope shall never reach
completion.

'james

Rainer Joswig

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to
In article <390AF3EA...@mastnet.net>, John Clonts
<jcl...@mastnet.net> wrote:

> So I must use some other approach to even approach the question.
>

> Does my lack of understanding of the Lambda Nature the reason that my
> mind reels at this, that I saw on comp.object the other day:
>
> (define <
> (y
> (lambda (lesser)
> (lambda (x)
> (lambda (y)
> (((if-then-else (is-zero x))
> (lambda ()
> (((if-then-else (is-zero y))
> (lambda () false))
> (lambda () true))))
> (lambda ()
> (((if-then-else (is-zero y))
> (lambda () false))
> (lambda () ((lesser (predecessor x))
> (predecessor y)))))))))))
>

Read more about "Lambda Calculus" and the Y combinator.

Christopher Browne

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to
Centuries ago, Nostradamus foresaw a time when Paolo Amoroso would say:

>On 28 Apr 2000 11:47:24 -0400, David Rush <dr...@netscape.com> wrote:
>
>> I've just *got* to disagree with this. I haven't felt as close to the
>> silicon as I do in Scheme for *years*. Once you get it into your
>[...]
>> Now, I'm not saying that R5RS Scheme is a systems programming
>> language, but it's not very far removed from being one. The changes
>
>Aubrey Jaffer, the author of SCM, is already using Scheme for accessing the
>bare metal, i.e. for writing device drivers. Check the documents starting
>from:
>
> http://www-swiss.ai.mit.edu/~jaffer/SIMSYNCH.html

Device _drivers_?

Or hardware design tools?

The latter appears to be more the case than the former.
--
Academics denigrating "Popularizers"

"During the rise of the merchant class, the landed aristocracy
understood the value of creating food, but didn't appreciate that food
isn't valuable unless it reaches hungry mouths.

New ideas aren't valuable unless they reach hungry minds. "
-- Mark Miller

Rob Warnock

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to
Christopher Browne <cbbr...@hex.net> wrote:
+---------------

| > http://www-swiss.ai.mit.edu/~jaffer/SIMSYNCH.html
|
| Device _drivers_?
| Or hardware design tools?
+---------------

He pointed you at the wrong link, perhaps. Look up one level at:

<URL:http://www.swiss.ai.mit.edu/~jaffer/Work.html>

and then down at the last two links there...


-Rob

-----
Rob Warnock, 41L-955 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043

Bruce Hoult

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to
In article <ln1wds3...@lambda.unlambda.com>, ja...@fredbox.com (James
A. Crippen) wrote:

> Spending time immersed in the Scheme language (and related Lispish and
> functional languages) and in historical documents of the AI lab
> cultures will invariably assist you in understanding not only the
> superficial culture of the hackish crowd, its lore, and history, but
> also will aid you in understanding the underlying ideas and motives
> that produced such languages as Scheme. Probably one of the simpler
> windows into the culture is a dated but still relevant paper by
> Richard Gabriel,
> http://www.ai.mit.edu/docs/articles//good-news/good-news.html
> . This is a somewhat non-technical, short paper describing why the
> Lisp community as of ten years ago failed to succeed as well as it
> could have in the face of microcomputers, Unix, and poorly educated C
> hackers. It's well worth the read, and RPG not only defends his
> thesis well but brings up many sensitive topics rarely broached
> amongst the Lisp hacker community.

I read this paper some years ago, but just now re-read it.

My questions for the assembled masses are:

- where are we now, ten years later?
- What is this "next LISP" of which he speaks? (in
<http://www.ai.mit.edu/docs/articles/good-news/subsection3.3.6.html>)

-- Bruce

Paolo Amoroso

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to
On Sun, 30 Apr 2000 00:10:12 GMT, cbbr...@knuth.brownes.org (Christopher
Browne) wrote:

> Device _drivers_?
>
> Or hardware design tools?

Both. You have already seen the page on hardware design tools. Here is the
information on using Scheme for low level stuff, including writing device
drivers:

http://www-swiss.ai.mit.edu/~jaffer/scm95-2.html

I said "Check the documents starting _from_ ..." because at the time I
wrote my article I was offline and had only the top URL handy.

John Clonts

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to
James,

Thank you for this very cogent reply.

This, along with an e-mail from David Stone, has at least cleared up a
few things, and given me some bearings for future study.

Cheers,
John

James A. Crippen

unread,
Apr 30, 2000, 3:00:00 AM4/30/00
to
Joe Marshall <jmar...@alum.mit.edu> writes:

> "felix" <fe...@anu.ie> writes:
>
> > Joe Marshall wrote in message ...
> > >
> > >Baker suggested a trick where you never pop the C stack but just let
> > >it grow in one direction. When you fall off the end, you run the
> > >garbage collector to evacuate the continuations off the stack and then
> > >use LONGJMP to clear the stack. This gives you proper tail recursion
> > >*and* first-class continuations in one whack, bypassing at least some
> > >of the problems with using C.
> >
> > It's not quite proper: the C stack still grows, so you keep allocating
> > memory (for the C stack-frame, which is build anyway) even if you
> > are in a tight loop that does not cons as such.
>
> Since you are discarding it at the rate you are allocating it, it is
> properly tail recursive at the Scheme level. What it is at the C
> level is another thing.

Remember that gcc is supposed to guarantee output code that is
properly tail recursive. So the behavior should be the same at both
levels, at least with gcc. Or at least that's what I remember ... but
I haven't read the info files in a long time.
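
What "properly tail recursive at the Scheme level" buys you is easy to
see in isolation; a minimal example (any R5RS implementation is
required to run this in constant space):

  (define (count-down n)
    (if (zero? n)
        'done
        (count-down (- n 1))))   ; the call is in tail position

  (count-down 10000000)   ; => done, without the stack growing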

'james

Joe Marshall

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
ja...@fredbox.com (James A. Crippen) writes:

GCC can generate tail-recursive code, but I don't think there is a
guarantee. There are too many constructs in C that stymie proper tail
recursion (and C++ is worse).

~jrm


Guillermo 'Bill' J. Rozas

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
ja...@fredbox.com (James A. Crippen) writes:
>
> Remember that gcc is supposed to guarantee output code that is
> properly tail recursive. So the behavior should be the same at both
> levels, at least with gcc. Or at least that's what I remember ... but
> I haven't read the info files in a long time.

There are lots of cases that gcc doesn't handle right.
Of course, you may be able to restrict your code generator to the
cases that it does handle, but I've found this to be difficult.

David Rush

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
ja...@fredbox.com (James A. Crippen) writes:
> Joe Marshall <jmar...@alum.mit.edu> writes:
> > "felix" <fe...@anu.ie> writes:
> > > Joe Marshall wrote in message ...
> > > It's not quite proper: the C stack still grows, so you keep allocating
> > > memory (for the C stack-frame, which is built anyway) even if you
> > > are in a tight loop that does not cons as such.
> > Since you are discarding it at the rate you are allocating it, it is
> > properly tail recursive at the Scheme level. What it is at the C
> > level is another thing.
> Remember that gcc is supposed to guarantee output code that is
> properly tail recursive.

My memory of this is that it is properly tail-recursive only within a
single function, but not across procedural boundaries.

> So the behavior should be the same at both
> levels, at least with gcc. Or at least that's what I remember ... but
> I haven't read the info files in a long time.

Ditto. And it may have changed to support
the-language-that-must-not-be-named's pathetic attempts at automatic
storage management (since destructors can (and frequently do) have
side-effects). Caveat lector: I've not read Bjarne's latest edition of
the spec in enough detail to see if there's enough wiggle room for
tail-recursive destruction.

david rush
--
Who doesn't have enough neural activity yet to think of a good tagline


David Rush

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
John Clonts <jcl...@mastnet.net> writes:
> James A. Crippen wrote:
> > cbbr...@news.hex.net (Christopher Browne) writes:
> > > A student, in hopes of understanding the Lambda-nature, came to
> > > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> > > true", asked the student, "that PL-1 has many of the same data types
> > > as Lisp?" Almost before the student had finished his question,
> > > Greenblatt shouted, "FOO!", and hit the student with a stick.
> >
> > Replace PL/1 with C. Much more current, that. Same damned problem.
> > "FOO!" *smack*
> > Does anyone know the actual event behind this koan?

> My question is even easier: "Can someone explain what this koan
> *means*?"

FOO! *smack*

david rush
--
The lambda-nature which can be spoken is not the true lambda-nature...

David Rush

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
John Clonts <jcl...@mastnet.net> writes:
> Joseph Dale wrote:
> > John Clonts wrote:
> > > Christopher Browne wrote:
> > > > Centuries ago, Nostradamus foresaw a time when John Clonts would say:

> > > > >James A. Crippen wrote:
> > > > >> cbbr...@news.hex.net (Christopher Browne) writes:
> > > > >> > A student, in hopes of understanding the
> > > > >> >Lambda-nature, came to Greenblatt.
<snip>

> > > > >> > Greenblatt shouted, "FOO!", and hit the student with a stick.

> > > > >My question is even easier: "Can someone explain what this koan
> > > > >*means*?"

> > "A monk asked Joshu, a Chinese Zen master: "Has a dog Buddha-nature or


> > not?" Joshu answered: "Mu."
>
> What does Mu mean?

Mu = _|_ modulo (lambda-nature)

david rush
--
Divergent as always...

John Clonts

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
In article <w33aeia...@bellsouth.net>,

Luckily I have recently had 'bottom' explained to me here (IIRC by
Joe Marshall, thanks again).

I am familiar with modulo when applied to integers, but have never fully
understood its use idiomatically to mean something like
"notwithstanding" or "other than" or "over and above" or "up to the
point of". I now realize that this crowd here at c.l.s are just the
ones who might do a good job of clarifying that for me... David?

Thus might I proceed on my journey of "backing into" the lambda nature
(which is the only way, right?).

Cheers,
John


Sent via Deja.com http://www.deja.com/
Before you buy.

David Rush

unread,
May 1, 2000, 3:00:00 AM5/1/00
to

Eagerly awaiting SRFI-21. And when are you guys going to implement my
first five items? ;)

david rush
--
And I really do like PLT...

Barry Margolin

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
In article <brucehoult-30...@bruce.bgh>,

Bruce Hoult <bruce...@pobox.com> wrote:
>My questions for the assembled masses is:
>
> - where are we now, ten years later?

Not much better. The success of Unix, Windows, and C/C++ rather than MacOS
and Lisp are examples of the "worse is better" philosophy.

> - What is this "next LISP" of which he speaks? (in
> <http://www.ai.mit.edu/docs/articles/good-news/subsection3.3.6.html>

I don't think he had a specific Lisp in mind, but was describing an
approach to creating one. However, his description seems to have quite a
bit in common with ISLisp, the dialect that was being developed by the ISO
Lisp working group (Gabriel was X3J13's representative to that group for
several years), and Dylan also seems to have adopted some of this approach.

--
Barry Margolin, bar...@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Scott Ribe

unread,
May 1, 2000, 3:00:00 AM5/1/00
to

Bruce Hoult wrote:
>
> - where are we now, ten years later?

Losing even more precious years because of the bottomless well of money
Sun uses to promote Java!

> - What is this "next LISP" of which he speaks?

Dylan, in my opinion.

John Clonts

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
In article <w33em7m...@bellsouth.net>,

David Rush <ku...@bellsouth.net> wrote:
> John Clonts <jcl...@mastnet.net> writes:
> > > James A. Crippen wrote:
> > > > cbbr...@news.hex.net (Christopher Browne) writes:
> > > > > A student, in hopes of understanding the Lambda-nature, came to
> > > > > Greenblatt. As they spoke a Multics system hacker walked by. "Is it
> > > > > true", asked the student, "that PL-1 has many of the same data types
> > > > > as Lisp?" Almost before the student had finished his question,
> > > > > Greenblatt shouted, "FOO!", and hit the student with a stick.
> > >
> > > Replace PL/1 with C. Much more current, that. Same damned problem.
> > > "FOO!" *smack*
> > > Does anyone know the actual event behind this koan?
>
> > My question is even easier: "Can someone explain what this koan
> > *means*?"
>
> FOO! *smack*
>
> david rush
> --
> The lambda-nature which can be spoken is not the true lambda-nature...
>

Let me rephrase my question (i.e. ask a different one(s)))

What are the datatypes in Scheme? Are they [ boolean, number, pair,
vector, string, char, integer, complex, real, symbol, promise,
continuation, procedure, macro] (ok, it's a list of all the
"type-predicate-looking" functions in the LispMe built-ins)

What are the datatypes in PL/1, of which I have *no* familiarity? Are
they the same as C?

Does the Multics system hacker have anything to do with the point
(fixed-point?) of this koan?

Michael Hudson

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
David Rush <ku...@bellsouth.net> writes:

> Ditto. And it may have changed to support
> the-language-that-must-not-be-named's pathetic attempts at automatic
> storage management (since destructors can (and frequently do) have
> side-effects). Caveat lector: I've not read Bjarne's latest edition
> of the spec in enough detail to see if there's enough wiggle room
> for tail-recursive destruction.

Well, you know at compile time whether any local variables have
destructors or not. If they don't, fine; it's a tail call. If they
do it's not - it's a bit like

(with-open-file (a "blah")
  (do-something-with a))

in common lisp (or binding a special variable, come to that);
do-something-with isn't a tail call.
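
The same thing shows up in Scheme if the body is wrapped in
dynamic-wind. A sketch (the helper below is my own invention, not a
standard procedure, though R5RS's call-with-input-file behaves along
these lines): the call to proc cannot be a tail call, because the
cleanup thunk still has to run after it returns.

  (define (call-with-open-port name proc)
    (let ((port (open-input-file name)))
      (dynamic-wind
        (lambda () #f)                          ; before: nothing to do
        (lambda () (proc port))                 ; not a tail call...
        (lambda () (close-input-port port)))))  ; ...this runs afterwards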

And what's the good of a non-side-effecting destructor? That one
escapes me!

Cheers,
M.

--
93. When someone says "I want a programming language in which I
need only say what I wish done," give him a lollipop.
-- Alan Perlis, http://www.cs.yale.edu/~perlis-alan/quotes.html

Tim Moore

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
On 28 Apr 2000, James A. Crippen wrote:
> I'm not sure about the BSD implementation, but in ITS ISTR any system
> call could not only be restarted, but totally backed out of such that
> the system call seemed to never actually have happened. The feeling
> of the ITS hackers is that if this was already done once there's no
> reason for anyone not to implement it again, since the brain work of
> inventing it has already been done. Nevermind the fact that all the
> ITS source was written in an incompatible (sorta) version of the
> PDP-10 assembly language (which had many features of the higher level
> languages of the time, in fact), and that the PDP-10 instruction set
> had certain aspects that were hard to duplicate on other platforms.
> And that much of the code to ITS is impossible to read without
> commentary from the original authors.
>
> There's a paper about PCLSRing written by Alan Bawden that I can't
> seem to recall. But if you search for his name and the string "PCLSR"

http://www.inwap.com/pdp10/pclsr.txt, as others have said. Thanks for the
pointer to a very interesting paper. Yeah, Unix doesn't do "that." It's
a little unclear to me what "interrupt handler" means in ITS, whether it's
more or less the same as a signal handler in Unix or code that runs in
Exec mode. Also, it seems like programs in ITS mucked with each other's
state in ways that just don't happen in Unix today, except when using a
debugger; alternative mechanisms like signals are used. Whether that's a
good or bad thing probably depends on whether or not you're an embittered
ITS hacker :)

I believe that restartable system calls and the fact that most system
calls are not interruptable in Unix today gets one most of the way to
consistency and not having to check in user code whether or not a system
call has been interrupted in the normal course of things. Of course, that
"most of the way" is the whole ITS side of the argument.

Tim

Michael T. Richter

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
Barry Margolin <bar...@genuity.net> wrote in message
news:MAhP4.37$_B6.737@burlma1-snr2...

> The success of Unix, Windows, and C/C++ rather than MacOS
> and Lisp are examples of the "worse is better" philosophy.

MacOS isn't exactly an example to hold up as "better is better". It's just
as bogus as any other GUI API that I've seen. (I haven't looked into BeOS'
GUI stuff yet, so I'm holding out some hope of finding a clean, consistent,
powerful GUI API.)


Rob Myers

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
> From: "Michael T. Richter" <m...@igs.net>
> Organization: Bell Solutions
> Reply-To: "Michael T. Richter" <m...@ottawa.com>
> Newsgroups: comp.lang.scheme,comp.lang.lisp,comp.lang.dylan
> Date: Mon, 01 May 2000 17:04:43 GMT
> Subject: Re: The Lambda Nature

It's not bad for 1984...

MacOS X has modern APIs and a cool new look. d2c can generate "Carbon"
compatible code, so programming MacOSX in Dylan should be cool.

- Rob.


Michael Hudson

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
John Clonts <joh...@my-deja.com> writes:

> Let me rephrase my question (i.e. ask a different one(s)))
>
> What are the datatypes in Scheme? Are they [ boolean, number, pair,
> vector, string, char, integer, complex, real, symbol, promise,
> continuation, procedure, macro] (ok, its a list of all the
> "type-predicate-looking" functions in the LispMe built-ins)
>
> What are the datatypes in PL/1, of which I have *no* familiarity? Are
> they the same as C?
>
> Does the Multics system hacker have anything to do with the point
> (fixed-point?) of this koan?

So far as I understand it, the reason the student got hit with the
stick is that IN SEEKING TO UNDERSTAND THE LAMBDA-NATURE he asked "is
it true that ...". I presume that if he had just asked the question
out of intellectual curiosity, his hide would not have suffered so.

Have you read SICP?

Cheers,
M.

--
59. In English every word can be verbed. Would that it were so in
our programming languages.

Barry Margolin

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
In article <LOiP4.1385$CL3....@198.235.216.4>,

Michael T. Richter <m...@ottawa.com> wrote:
>Barry Margolin <bar...@genuity.net> wrote in message
>news:MAhP4.37$_B6.737@burlma1-snr2...
>> The success of Unix, Windows, and C/C++ rather than MacOS
>> and Lisp are examples of the "worse is better" philosophy.
>
>MacOS isn't exactly an example to hold up as "better is better". It's just
>as bogus as any other GUI API that I've seen. (I haven't looked into BeOS'
>GUI stuff yet, so I'm holding out some hope of finding a clean, consistent,
>powerful GUI API.)

My respect is for the underlying Macintosh OS architecture. It's very well
modularized, and Apple did a good job providing both high-level interfaces
for most applications and a reasonable number of low-level hooks where
needed. The API to the GUI itself has some warts, but it's not too bad.

Also, I was trying to use an example that most people could relate to.
Personally, I consider Multics and Genera much better, but most readers
would not know enough about them to understand what's so great about them.

Erann Gat

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
In article <ln1wds3...@lambda.unlambda.com>, ja...@fredbox.com (James
A. Crippen) wrote:

> John Clonts <jcl...@mastnet.net> writes:
>
> > Ok, I guess I didn't know what a koan *was*.
>
> A koan is a riddle used in Zen Buddhism (and other related forms) to
> pose a question or problem to the student whose answer transcends
> ordinary human logical thought. The student is expected to find some
> answer or reply to the koan in a certain manner that only his master
> can recognize. Hopefully after a few of these the student learns to
> think extrarationally, and potentially experiences satori (the
> blinding flash of enlightenment many people describe) and, with time,
> total enlightenment. The typical koan is a Japanese haiku or similar
> poetical verse; short, strong, and endlessly open to interpretation
> and examination.

I'd like to offer another perspective on Koans.

The ability to ask the question, "What does it mean?" and make sense of
the reply is a powerful tool for learning the meanings of things. In fact,
it is so powerful that it is easy to fall into the trap of thinking that it
is the best, or even the only tool for learning the meanings of things. In
such a mindset the apparently cryptic answers offered up to the question
"What does it mean?" when applied to koans can be frustrating, which is
precisely the point. The purpose of a koan is to get you out of the
mindset that the best way to understand the meaning of something is to
have it *explained*. The Zen mindset holds that there are truths beyond
explanation, beyond language, and even beyond inquiry. Hence the Zen
saying, "If you are seeking it then you are far from the Way." Likewise,
if you can explain it, then it is not the Buddha Nature.

Here's another way to look at it: At some point in one's life one must
somehow learn to ask the question "What does it mean?" and make sense
of the reply *without* being able to ask the question and make sense
of the reply. The process by which one does *that* is *also* a powerful
tool for learning the meanings of things. The point of a koan is to
force you to go back to learning things that way. (To the extent that
it is possible to capture the point of Zen in words, which is to say not
at all, it is to learn how to learn using the process that you
use to learn your first words. By definition it is a process that does
not involve words, which of course makes it very difficult to explain.)

Erann Gat
g...@jpl.nasa.gov

John Clonts

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
In article <m3ln1uj...@atrus.jesus.cam.ac.uk>,

Michael Hudson <mw...@cam.ac.uk> wrote:
> John Clonts <joh...@my-deja.com> writes:
>
> > Let me rephrase my question (i.e. ask a different one(s)))
> >
> > What are the datatypes in Scheme? Are they [ boolean, number, pair,
> > vector, string, char, integer, complex, real, symbol, promise,
> > continuation, procedure, macro] (ok, its a list of all the
> > "type-predicate-looking" functions in the LispMe built-ins)
> >
> > What are the datatypes in PL/1, of which I have *no* familiarity? Are
> > they the same as C?
> >
> > Does the Multics system hacker have anything to do with the point
> > (fixed-point?) of this koan?
>
> So far as I understand it, the reason the student got hit with the
> stick is that IN SEEKING TO UNDERSTAND THE LAMBDA-NATURE he asked "is
> it true that ...". I presume that if had had just asked the question
> out of intellectual curiosity, his hide would not have suffered so.
>

Oh, so he is admonished for thinking that truth and falsehood actually
exist, whereas "in truth" they really don't? I'll have to ponder that a
while, obviously.

> Have you read SICP?
>

Well, I'm only on chapter 3 of the exercises. I scanned the rest a
while back. Is there something in there about data types that I don't
recall? Or rather are you referring to the overall lambda-nature that
SICP will impart to me eventually, if I can stick with it. "It will make
you strong-- unless it kills you".

> Cheers,
> M.
>
> --
> 59. In English every word can be verbed. Would that it were so in
> our programming languages.
> -- Alan Perlis, http://www.cs.yale.edu/~perlis-alan/quotes.html

I happy that quote. But I feel like y'all are lambda-spaghettiing my
brain!

Michael Hudson

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
John Clonts <joh...@my-deja.com> writes:

> In article <m3ln1uj...@atrus.jesus.cam.ac.uk>,
> Michael Hudson <mw...@cam.ac.uk> wrote:
> > So far as I understand it, the reason the student got hit with the
> > stick is that IN SEEKING TO UNDERSTAND THE LAMBDA-NATURE he asked "is
> > it true that ...". I presume that if had had just asked the question
> > out of intellectual curiosity, his hide would not have suffered so.
> >
>
> Oh, so he is admonished for thinking that truth and falsehood actually
> exist, whereas "in truth" they really don't? I'll have to ponder that a
> while, obviously.

No, my reading is that he got hit for thinking that knowing about the
data structures of lisp would lead to understanding the lambda-nature
(sorry, I should have put more inside the quotes).



> > Have you read SICP?
> >
>
> Well, I'm only on chapter 3 of the exercises. I scanned the rest a
> while back. Is there something in there about data types that I don't
> recall? Or rather are you referring to the overall lambda-nature that
> SICP will impart to me eventually, if I can stick with it. "It will make
> you strong-- unless it kills you".

The latter.

Cheers,
M.

--
81. In computing, turning the obvious into the useful is a living
definition of the word "frustration".

David Rush

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
Michael Hudson <mw...@cam.ac.uk> writes:

> David Rush <ku...@bellsouth.net> writes:
> Well, you know at compile time whether any local variables have
> destructors or not. If they don't, fine; it's a tail call. If they
> do it's not

Which misses the case where invoking the destructor is
tail-recursively OK.

> And what's the good of a non-side-effecting destructor? That one
> escapes me!

Depends on whether you consider explicit storage management a
side-effect or a language bug, I guess.

david rush
--
Closing open files is *definitely* a side-effect...

David Rush

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
John Clonts <joh...@my-deja.com> writes:
> In article <w33em7m...@bellsouth.net>,
> David Rush <ku...@bellsouth.net> wrote:
> > John Clonts <jcl...@mastnet.net> writes:
> > > James A. Crippen wrote:
> > > > cbbr...@news.hex.net (Christopher Browne) writes:
> > > > > A student, in hopes of understanding the Lambda-nature,
...

> > > > > Greenblatt shouted, "FOO!", and hit the student with a stick.
> > The lambda-nature which can be spoken is not the true lambda-nature...

> Let me rephrase my question (i.e. ask a different one(s)))


>
> What are the datatypes in Scheme?

They are not relevant to the Lambda-nature...

> What are the datatypes in PL/1, of which I have *no* familiarity?
> Are they the same as C?

                                            _ _
All data are functions, except for _|_ and  |
Types are the essence of relationship
The lambda-nature is bug-free

> Does the Multics system hacker have anything to do with the point
> (fixed-point?) of this koan?

Does the backup-tape contain the lambda nature if it is never read?

david rush
--
Who always thought that koans were mostly about maintaining the
authority of the priesthood in the scary devil monastery...
...too much coffee!

David Rush

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
John Clonts <joh...@my-deja.com> writes:
> In article <m3ln1uj...@atrus.jesus.cam.ac.uk>,
> Michael Hudson <mw...@cam.ac.uk> wrote:
> > John Clonts <joh...@my-deja.com> writes:
> > > Let me rephrase my question (i.e. ask a different one(s)))
> > So far as I understand it, the reason the student got hit with the
> > stick is that IN SEEKING TO UNDERSTAND THE LAMBDA-NATURE he asked "is
> > it true that ...". I presume that if had had just asked the question
> > out of intellectual curiosity, his hide would not have suffered so.

Oh, I doubt *that*, but to say further would be to violate the
Usenet-nature of this newsgroup...

david rush
--
Whose meta-level processor is about to dump core...

David Rush

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
David Rush's brain farted:
> Mu == _|_ modulo (lambda-nature)

Tom Ivar Helbekkmo <tih...@kpnQwest.no> writes:


> John Clonts <joh...@my-deja.com> writes:
> > I am familiar with modulo when applied to integers, but have never

> > fully understood its use idiomatically [...]
>
> It's really simple, actually. 13 modulo 5 is 3, right? Change the
> word order a bit, and you can say that 13 is 3, modulo 5.

Errr. 13 is *not* 3 mod 5. From my number-theory days (which was not
my best subject) 13 is *equivalent to* 3 mod 5. 'modulo' is not an
operator in the same way that *, /, div, rem, and even, *shudder*,
mod[1] are (we've got a serious naming problem here). Essentially, in
the expression:

13 == 3 modulo 5

The 'modulo' changes the domain of discussion from being, say, the
natural numbers, to a five-element cyclic group. This is very similar
to the notion of modulating a song, where you take its harmonic
components and express them in a different key.

> This could be rephrased as "thirteen is three, except for all the
> fives". There you go -- in hacker jargon, "modulo" simply means
> "except".

Well, a modulus is a congruence constraint. 'except' is a loose
translation of the concept. I use it to assert that you must apply a
proper mapping of concepts in order to realize the truth-value of the
proposition.
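
For the Scheme angle, the standard procedures already make the
distinction; a quick illustration and nothing more:

  (modulo 13 5)       ; => 3   (result takes the sign of the divisor)
  (modulo -13 5)      ; => 2
  (remainder -13 5)   ; => -3  (result takes the sign of the dividend)
  (= (modulo 13 5) (modulo 3 5))   ; => #t, i.e. 13 and 3 are congruent mod 5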

david rush
--
[1] IIRC, the CS function mod(m, p) is defined as 'r' from the equation:

m = pq + r,  0 <= r < p

except when it isn't :(

David Rush

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
Tom Ivar Helbekkmo <tih...@kpnQwest.no> writes:
> David Rush <ku...@bellsouth.net> writes:
> > > > "A monk asked Joshu, a Chinese Zen master: "Has a dog Buddha-nature or
> > > > not?" Joshu answered: "Mu."
> > >
> > > What does Mu mean?
> > Mu = _|_ modulo (lambda-nature)
>
> Uh -- I've been told that the answer "mu" unasks the question.

Asserting that a computation diverges is effectively the same thing,
no? It sure seems to me that a kill -9 'un-runs' a computation (for
appropriate values of un-run).

> ...oh, and here's another Buddha-nature twist, by W. Sommerfeld:
>
> | A novice of the temple once approached the Chief Priest with a
> | question. "Master, does Emacs have the Buddha nature?" the novice
> | asked. The Chief Priest had been in the temple for many years and
> | could be relied upon to know these things. He thought for several
> | minutes before replying, "I don't see why not. It's got bloody well
> | everything else."

ROTFL!

david rush
--
Who is keeping his ride waiting...

John Clonts

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
David Rush wrote:
[snip]

> david rush
> --
> Who always thought that koans were mostly about maintaining the
> authority of the priesthood in the scary devil monastery...
> ...too much coffee!

INDEED.

John Clonts

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
David Rush wrote:
>
> John Clonts <jcl...@mastnet.net> writes:
> > Joseph Dale wrote:
> > > John Clonts wrote:
> > > > Christopher Browne wrote:
> > > > > Centuries ago, Nostradamus foresaw a time when John Clonts would say:
> > > > > >James A. Crippen wrote:
> > > > > >> cbbr...@news.hex.net (Christopher Browne) writes:
> > > > > >> > A student, in hopes of understanding the
> > > > > >> >Lambda-nature, came to Greenblatt.
> <snip>

> > > > > >> > Greenblatt shouted, "FOO!", and hit the student with a stick.
>
> > > > > >My question is even easier: "Can someone explain what this koan
> > > > > >*means*?"
>
> > > "A monk asked Joshu, a Chinese Zen master: "Has a dog Buddha-nature or
> > > not?" Joshu answered: "Mu."
> >
> > What does Mu mean?
>
> Mu = _|_ modulo (lambda-nature)
>
> david rush
> --
> Divergent as always...

Heh-heh. Ok, so perhaps I should read this as

mu is bottom 'as-applied-to' the lambda nature.
or
mu is bottom 'except-for' the lambda nature.


??

Sigh,
John

John Clonts

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
Tom Ivar Helbekkmo wrote:

>
> David Rush <ku...@bellsouth.net> writes:
>
> > > > "A monk asked Joshu, a Chinese Zen master: "Has a dog Buddha-nature or
> > > > not?" Joshu answered: "Mu."
> > >
> > > What does Mu mean?
> >
> > Mu = _|_ modulo (lambda-nature)
>
> Uh -- I've been told that the answer "mu" unasks the question. In
> this specific instance, I take it to mean that the question itself
> proves that the student lacks the basic understanding needed to start
> contemplating the question of Buddha-nature.
>

Well, that sounds like me alright.

> Going back to the original koan, I would say that Greenblatt, using a
> reference to Joshu's well-known answer, is unasking the fundamentally
> wrong question the student poses; asking the master to compare the
> data type hierarchies of PL/1 (or C) and LISP shows that you do not
> understand data types.
>

Oh! *Is* Greenblatt referring intentionally to the Joshu koan? So
Greenblatt's "Foo" is the Joshu's "Mu"? I originally thought the Joshu
koan was just a David Rush Divergence.

But now, (lambda-nature Foo) == (buddha-nature Mu) ==> 'bottom'

Ok, works for me.

Like the geneticist in Florida: "One thing we're absolutely certain of,
is that we have no idea why this works".

Maybe comparing datatypes in lisp and c is like comparing toenails of
eagles and elephants?

> Erik Naggum said it very well:
>
> | let me put it this simply: in Common Lisp, objects are typed,
> | variables are not. in the C tradition, variables are typed, objects
> | are not. if this doesn't hurt when you think about it, you have not
> | understood what it means for the purported object-orientedness of
> | C++ and Java.
>

(I wonder whose face he was grinding into the mud at the time)

> ...oh, and here's another Buddha-nature twist, by W. Sommerfeld:
>
> | A novice of the temple once approached the Chief Priest with a
> | question. "Master, does Emacs have the Buddha nature?" the novice
> | asked. The Chief Priest had been in the temple for many years and
> | could be relied upon to know these things. He thought for several
> | minutes before replying, "I don't see why not. It's got bloody well
> | everything else."
>

Heh-heh. But heh, Sun's got it too: the upcoming Java Buddhapi spec!

Cheers,
John

James A. Crippen

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
g...@jpl.nasa.gov (Erann Gat) writes:

> A. Crippen) wrote:
>
> > John Clonts <jcl...@mastnet.net> writes:
> >

Yes, yes. That's a much more sensible way of putting it. It's
interesting to contemplate that by trying to understand koans and
their purpose we are already far from understanding them.

There's no point to understanding a koan because you can't. That's
what you're supposed to understand. And if you think you can
consciously understand that then you obviously don't understand. The
more you think about your situation relative to the koan you're trying
to understand the more you realize that you're just trying to convince
yourself that you're something you're not. A koan can be the start of
a vast journey into realms of consciousness and thought far removed
from our typical western view. And all this can be done while sitting
on a bench at the transit station while a drunk pukes beside you.

You look at the puke and you realize that he's puking. And that this
is much more essential to your life than a silly koan. And if you're
paying attention you would instantly realize what you've been grasping
after all this time and not coming close to getting.

Bother. Something like that.



> Here's another way to look at it: At some point in one's life one must
> somehow learn to ask the question "What does it mean?" and make sense
> of the reply *without* being able to ask the question and make sense
> of the reply. The process by which one does *that* is *also* a powerful
> tool for learning the meanings of things. The point of a koan is to
> force you to go back to learning things that way. (To the extent that
> it is possible to capture the point of Zen in words, which is to say not
> at all, it is to learn how to learn using the process that you
> use to learn your first words. By definition it is a process that does
> not involve words, which of course makes it very difficult to explain.)

And hence the more you try to explain it the further you travel from
the original idea, because your quest for explanation takes you into
more and more complicated semantic spheres, which is what you were
trying to get away from in the first place.

Talking about it is not a good thing. Better to stare at a wall and
stop thinking. Then you'll begin to stop understanding...

'james

James A. Crippen

unread,
May 1, 2000, 3:00:00 AM5/1/00
to
John Clonts <jcl...@mastnet.net> writes:

> David Rush wrote:
> >
> > John Clonts <jcl...@mastnet.net> writes:

> > > Joseph Dale wrote:
> > > > John Clonts wrote:
> > > > > Christopher Browne wrote:
> > > > > > Centuries ago, Nostradamus foresaw a time when John Clonts would say:
> > > > > > >James A. Crippen wrote:
> > > > > > >> cbbr...@news.hex.net (Christopher Browne) writes:
> > > > > > >> > A student, in hopes of understanding the
> > > > > > >> >Lambda-nature, came to Greenblatt.
> > <snip>
> > > > > > >> > Greenblatt shouted, "FOO!", and hit the student with a stick.
> >
> > > > > > >My question is even easier: "Can someone explain what this koan
> > > > > > >*means*?"
> >

> > > > "A monk asked Joshu, a Chinese Zen master: "Has a dog Buddha-nature or
> > > > not?" Joshu answered: "Mu."
> > >
> > > What does Mu mean?
> >
> > Mu = _|_ modulo (lambda-nature)
> >
>

> Heh-heh. Ok, so perhaps I should read this as
>
> mu is bottom 'as-applied-to' the lambda nature.
> or
> mu is bottom 'except-for' the lambda nature.

...

You follow the lambda road determinedly as it enters the deep,
foreboding woods. All around you are giant B\"ohm trees, their labels
hanging down and brushing the top of your head, their infinite
eta-expansions stretching high upwards into a complex continuous
lattice of branches forming a ceiling to the forest. All around you
reduction paths twist this way and that, joining the road and leaving
it again, splitting off from some overgrown expression to disappear in
the dark, shadowy distance. You can barely make out unrecognizable
symbols in a myriad of strange alphabets visible here and there behind
the thick undergrowth of redexes. The wind whispers with the sound of
concentrating logicians through the trees.

Suddenly, a combinator appears in front of you!
It looks angry.
The combinator yells at you, threatening to apply you and bind your free
variables!
You wield a numeric function.
The combinator attacks you with an automorphism!
You feel the same.
You stab the combinator with your numeric function.
The combinator staggers!
The combinator wields a reduction.
The combinator tries to eta-reduce your bound variables!
Your pack feels lighter!
You slash the combinator with your numeric function leaving a Church
numeral behind.
The combinator bleeds heavily.
You pull the Barendregt grimoire from your pack and search for a spell.
The combinator throws a nasty-looking lambda-model at you!
The lambda-model strikes your leg!
You bleed heavily!
You read the Hindley-Rosen lemma aloud.
The combinator is confused!
You grapple with the combinator and apply it to _|_.
The combinator vanishes in a puff of logic!

...

That's lambda nature in its wildest form, perhaps.

'james

David Rush

unread,
May 2, 2000, 3:00:00 AM5/2/00
to
ja...@fredbox.com (James A. Crippen) writes:

> g...@jpl.nasa.gov (Erann Gat) writes:
> A koan can be the start of
> a vast journey into realms of conciousness and thought far removed
> from our typical western view. And all this can be done while sitting
> on a bench at the transit station while a drunk pukes beside you.
>
> You look at the puke and you realize that he's puking. And that this
> is much more essential to your life than a silly koan. And if you're
> paying attention you would instantly realize what you've been grasping
> after all this time and not coming close to getting.

Yes, yes. Which is why you must write code to understand the
lambda-nature, although code does not have it. ;) This may be the
problem with certain CS degree programs. <generalization type=gross>
It may also explain why such a high proportion of *really good*
programmers (at least in my experience) come from other disciplines
(electrical engineering and physics come to mind very
quickly).</generalization>

david rush
--
Who studied Computer Engineering at Uni (CWRU), not Computer Science...

James A. Crippen

unread,
May 2, 2000, 3:00:00 AM5/2/00
to
David Rush <ku...@bellsouth.net> writes:

Hoho. Good point, sir. Indeed, many Computer Science students,
especially those who wish to go on into professional programming or
administration, seem to lose all sight of the Nature of their work.
To many of them it is just work. Not exploration, not a chance to
expand their universe, nor any other life-affirming and life-improving
effort. The ones who decide to head into acedemia seem to understand
this a bit better, but there are always a percentage who are there
because they couldn't make it in the Real World. Programmers from
other disciplines have more perspective and more understanding of what
is behind the editor that they write their code in. But then, such a
generalization can quite probably be made about any sort of person who
changes their profession.

'james


Brian Harvey

unread,
May 3, 2000, 3:00:00 AM5/3/00
to

I should know better than to reply to trolls, but at least this isn't
cross-posted to 20 newsgroups...

Anyway, this idea about non-CS programmers having a deeper understanding
of (especially) programming language design issues seems to me not well
supported by the facts. Non-CS engineers still do a lot of programming
in Fortran, for the excellent reason that they have good scientific
computation libraries in Fortran -- but it's not a very elegant language,
certainly not displaying anything I'd want to call "Lambda nature"!

And, here at Berkeley, the engineers have just recently decided they
should move past Fortran, so they've chosen Matlab as their new
programming language. At least this is quite a bit more Lambda-naturish,
since it encourages functional programming, but it's still full of ad-hoc
features, not "removing the obstacles that make additional features
seem necessary."

Joe Marshall

unread,
May 3, 2000, 3:00:00 AM5/3/00
to
ja...@fredbox.com (James A. Crippen) writes:

> Programmers from other disciplines have more perspective and more
> understanding of what is behind the editor that they write their
> code in.

That statement is absurd.

Erann Gat

unread,
May 3, 2000, 3:00:00 AM5/3/00
to
In article <uem7j7...@alum.mit.edu>, Joe Marshall
<jmar...@alum.mit.edu> wrote:

When discussing the Lambda nature, all statements are absurd. Discussion
is absurd. Absurdity is absurd. Foo! <Whack!> ;-)

E.

Russell Wallace

unread,
May 3, 2000, 3:00:00 AM5/3/00
to
John Clonts wrote:
>
> In article <w33em7m...@bellsouth.net>,
> David Rush <ku...@bellsouth.net> wrote:
> > FOO! *smack*
> >
> > david rush
> > --
> > The lambda-nature which can be spoken is not the true lambda-nature...

Piffle :)

Me, I agree with whoever it was who said that if you can't explain
something to a barmaid, it means you don't really understand it.

> Let me rephrase my question (i.e. ask a different one(s)))
>

> What are the datatypes in Scheme? Are they [ boolean, number, pair,
> vector, string, char, integer, complex, real, symbol, promise,
> continuation, procedure, macro] (ok, its a list of all the
> "type-predicate-looking" functions in the LispMe built-ins)

Pretty much, yes.

Some of those (e.g. real, char) are basically the same as the
corresponding C data types, some of them (e.g. pair, string, procedure)
have significant differences (let me know if you're interested in what
those are). Continuations don't really have an equivalent data type in
C, though some applications of them can be duplicated with longjmp().
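
The longjmp()-flavoured use looks like this in Scheme (a small sketch;
find-first is a made-up name, not a standard procedure):

  (define (find-first pred lst)
    (call-with-current-continuation
      (lambda (return)
        (for-each (lambda (x) (if (pred x) (return x))) lst)
        #f)))

  (find-first even? '(1 3 4 5))   ; => 4, escaping the loop early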

> What are the datatypes in PL/1, of which I have *no* familiarity? Are
> they the same as C?

Dunno, I've no familiarity with PL/1 either. I'd be surprised if they
were very different, since the languages are medium closely related.

> Does the Multics system hacker have anything to do with the point
> (fixed-point?) of this koan?

I'm curious about that one myself.

--
"To summarize the summary of the summary: people are a problem."
Russell Wallace
mailto:rwal...@esatclear.ie

James McCartney

unread,
May 7, 2000, 3:00:00 AM5/7/00
to
In article <LOiP4.1385$CL3....@198.235.216.4>, "Michael T. Richter"
<m...@ottawa.com> wrote:

>(I haven't looked into BeOS'
> GUI stuff yet, so I'm holding out some hope of finding a clean,
> consistent,
> powerful GUI API.)
>

Hopefully making the following remarks will not make me enemies at Be,
but can serve as constructive criticism on how to improve the OS.

I have worked extensively with BeOS and its GUI API and
have been involved in two commercial applications.
One shipped (www.lcsaudio.com) and is a very successful product, the
other didn't.

IMO Be's strength is their OS kernel.
The threading model and priorities really work well.

Everyone loves the GUI API. At first. It's great for making toy programs.
Most people never work on large projects so never run into the
limitations. It is just not as well designed as Smalltalk or Nextstep or
MacApp. I think that this is the reason why there are still not very
many large commercial apps on it.

The UI framework seems to have been coded to a set of specs rather than
having had a paradigm in mind of how everything should work. Many basic
Design Patterns are not available directly in the framework meaning
everyone rolls their own.

The decision to force one thread per window on the programmer causes
needless synchronization implementation complexity for many apps.

There is also the C++ fragile base class problem. Be has already broken
binary compatibility once after they had committed to freezing it.
Many classes are padded out with dummy variables and virtual functions
for future expansion. For an OS that touts being a non-legacy OS, it has
a real legacy creating demon lurking in it in the form of the FBC
problem.

One of the things that is most celebrated about Be's framework is the
BMessage which implements IPC, dynamic binding, drag and drop,
copy-paste, etc. Most of those who celebrate it have never used a
dynamic language like Smalltalk or Objective-C where these features come
along with the language's built in message passing. So in a BeOS app you
are writing a lot of switch() statements to handle BMessages because C++
has no built in dynamic binding messaging. Writing these switch
statements becomes a real pain.

On top of that there is Be's scripting architecture which overcomes the
inabilities of C++ to query the messages an object can respond to, or
store messages for later sending.

So basically you have in the BeOS API a crude implementation in C++ of a
dynamic messaging system which requires a significantly greater
notational burden than if a dynamic language had been used.

I must say that at the beginning I was a true BeOS convert, and I
couldn't understand what all these Nextstep people were ranting about.
Well into my first large program I began to see the light.

All that said, BeOS is still potentially the best OS for real time media
applications because of its threading model.

Kenneth Dickey

unread,
May 11, 2000, 3:00:00 AM5/11/00
to
> > On 24 Apr 2000 19:53:22 -0800, The Almighty Root <ja...@fredbox.com> wrote:
> >
> > > Hey, what do you know! I've got all that already. Too bad there's
> > > no operating system written in my favorite language...

Sorry, I am not a regular reader of this list (no time 8^() so the following may be considered as "random input".

There have been some interesting OS research done in Scheme dating back to Mitch Wand's seminal paper:
=============>
Mitchell Wand. Continuation-Based Multiprocessing. Higher-Order and Symbolic Computation,
12(3):285--299, October 1999. Originally appeared in the 1980 Lisp Conference.
ftp://ftp.ccs.neu.edu/pub/people/wand/papers/hosc-99.ps

Abstract: Any multiprocessing facility must include three features: elementary exclusion, data protection, and process saving. While elementary exclusion must rest on some hardware facility (e.g. a test-and-set instruction), the other two requirements are fulfilled by features already present in applicative languages. Data protection may be obtained through the use of procedures (closures or funargs), and process saving may be obtained through the use of the CATCH operator. The use of CATCH, in particular, allows an elegant treatment of process saving.

We demonstrate these techniques by writing the kernel and some modules for a multiprocessing system. The kernel is very small. Many functions which one would normally expect to find inside the kernel are completely decentralized. We consider the implementation of other schedulers, interrupts, and the implications of these ideas for language design.
=============<
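
The flavour of Wand's idea, with call/cc standing in for the CATCH
operator, is roughly the toy round-robin scheduler below. This is a
sketch of the technique only, not Wand's code; the procedure names
(spawn, yield, dispatch) are mine, and there is no elementary exclusion
or data protection here.

  (define ready-queue '())

  (define (spawn thunk)               ; make a new "process"
    (set! ready-queue
          (append ready-queue
                  (list (lambda (ignored) (thunk) (dispatch))))))

  (define (yield)                     ; save this process, run another
    (call-with-current-continuation
      (lambda (k)
        (set! ready-queue (append ready-queue (list k)))
        (dispatch))))

  (define (dispatch)                  ; run the next saved process
    (if (null? ready-queue)
        'all-done
        (let ((next (car ready-queue)))
          (set! ready-queue (cdr ready-queue))
          (next #f))))

  (spawn (lambda () (display "A1 ") (yield) (display "A2 ")))
  (spawn (lambda () (display "B1 ") (yield) (display "B2 ")))
  (dispatch)   ; prints: A1 B1 A2 B2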

More Recently, the STING project was very interesting:

=============>
Suresh Jagannathan and James Philbin. A Customizable Substrate for Concurrent Languages. In ACM SIGPLAN '91 Conference on Programming Language Design and Implementation, June 1992.
http://www.neci.nj.nec.com/homepages/jagannathan/papers/pldi92.ps

Suresh Jagannathan and James Philbin. A Foundation for an Efficient Multi-Threaded Scheme System. In Proceedings of the 1992 Conference on Lisp and Functional Programming, June 1992.
http://www.neci.nj.nec.com/homepages/jagannathan/papers/lfp92.ps
=============<

Note also that PRESCHEME was designed for system's programming [ref Scheme48]:
=============>
Richard Kelsey. Pre-Scheme: A Scheme Dialect for Systems Programming.
ftp://ftp.nj.nec.com/pub/kelsey/prescheme.ps.gz
=============<

I have also been toying with the idea of using Scheme to build reliable systems (but perhaps on top of a Linux kernel). The major piece of work seems to be doing a front end for GCC and the associated 'back end' work for GDB that would allow building a decent GUI of the quality of some of the commercial Common Lisp implementations (e.g. MCL). That in turn would enable both a "seamless" C FFI and allow some of the energy flowing into Linux to be channelled incrementally into building more robust systems (see my "System Principles" scribblings, below).

I find much to like, particularly in Gambit, PLT/MzScheme, and Scheme48, depending on hw/system resources.

Note that you can today use Schemes which compile to C to produce ".o"s which are "stand alone", given the work of duplicating (or linking in) the support library pieces required. In particular, Gambit's compiler info specifies which external routines are called and might be a convenient starting point. [I have only started looking at MzScheme and like the language and GPL'ed licence a lot but have not yet looked at the compiler & runtime requirements in that environment; I am also very interested in seeing what has been done with Gambit 4.0 when it is available (are you listening Marc?)].


$0.02,
-KenD

=============>
File: "System Principles"
Implements: Notes on System Principles for reliable, useful, personal
computing devices
Author: Ken Dickey
Date: 2000 March 20
Updated: 2000 March 21


SYSTEM PRINCIPLES FOR RELIABLE, USEFUL, PERSONAL COMPUTER SYSTEMS

Over time I have come to the conclusion that there needs to be a radical rethinking of 'consumer computing'. I had thought that the rise of the consumer and networking would increase reliability and usability because of consumer demand. Instead it appears that 'consumers' now put up with crashing cell phones.

It is time to rethink the fundamental strategies we use to build computing systems implementing tools and devices for ordinary people.

In accord with modern thinking on health, one should think of 'self healing' systems.


- SELF CENTERING (homeostatic)

- Model of Self (endomorphic)
- Consistency monitoring; self check
- State & feedback driven centering procedures
- Both HW & SW components can be 'hot swapped'


- STATE CONSISTENCY

- Component & Data self check (incl checksums) at all levels
- State is Transacted
- Schema Evolution
- Component Update w Compatibility (incl version) Constraints
[legal component set -> transient -> legal component set]
- Backup is continuous (logfile) with transaction completions noted
- can always delete transient state, replay inputs from a checkpoint
- if transactions are local, can undo system state to previous
[=> special markers for distributed transaction undo]
- Backup/undo a normal part of system behavior
- can decide to uninstall a newer version of a component and then
replay transactions with old component


- SELF DESCRIBING

- Can always answer the questions:
- What can I do here?
- What is going on? [Activities]
- What is there? (what hw & sw subsystems/components are installed?)
- What is this component doing?
- Why the delay? Is it sane? What can I do about it?
- Can I install/uninstall/update this component?
-> What is affected? What required? What provided?


- UI RELIABLE AND CONSISTENT [Visibility & Control]

- UI never 'blocks'; User always feels system is immediately responsive
- All user interface actions are visible
- Either immediate (visual) response or visual indicator of an
action (e.g. mouse click) queued on a specific component.
- Clean separation between UI layer (always responsive) and underlying
components/applications (may be waiting, or compute bound)
- UI able to move, resize, scale, display menu/click/mouse
events/actions w/o blocking on component/application response
- Universal undo/redo (modulo non-local transacted commits)
- Including component install/uninstall/update
- Component/App can always indicate what is it doing (continuously or
by request) even when it is compute bound or blocked.

==============================================================
QUESTIONS:

- What are the rules/rubrics for developers? How checked?
- Design for self-test, fault diagnosis & fault repair
- Break compute bound computations into chunks [Agenda control structure]
- Provide explanations of actions: Explanatory Model
- Undo/Redo & Transactions
- Compatibility Constraints
- Provides/Requires
- Save/restore

- What HW & SW Technologies support the above?
[E.g. (sw):
'Garbage Collection' (automatic storage reclamation)
is a powerful consistency checker;
Closures aid in undo/state consistency, agenda control structures;
Object System
Recoverable Exceptions
Lexical & Dynamic variables & Dynamic Wind
Full Programming Environment
et cetera.. ]

- Use partial evaluation and runtime code generation to aid in reducing
system footprint in restricted resource environments (e.g. Embedded).

- What Baseline Components are Required?
- Ontology Database [Explanations; Provides/Requires] [XML]
- Language Runtime (sub)System(s)
- Constraint Checker
- Transaction System
- Component Manager; Data Manager
- Regulatory/Healing System
- Self Test System Manager/Scheduler
- Diagnostic System
- Repair System
- GUI [Manager]
- Event Manager
- IO Manager
- Time Manager
- Process Manager
- Device Manager
- Security Manager
- Per Task
- Storage Manager
Memory, Stable Storage
- Resource Custodians
- Threads
- Events [Eventspaces]
- IO
Files, Ports, Network Connections, Transactions


--- E O F ---

Brian Denheyer

unread,
May 11, 2000, 3:00:00 AM5/11/00
to

How about gambit with thread support (SRFI-19) running on top of a
copy of EROS ? *grin* Write up nano-x or other lightweight gui (with
threads, of course, not the old-technology event loop) and you have a
reliable, secure OS which can be extended with scheme. Cool.

Scheme48 would probably work well also, as would mzscheme. Rscheme is
another good possibility.

I NEED MORE TIME.

Brian

>>>>> "Kenneth" == Kenneth Dickey <ke...@earthlink.net> writes:

Kenneth> I have also been toying with the idea of using Scheme to
Kenneth> build reliable systems (but perhaps on top of a Linix
Kenneth> kernel). The major piece of work seems to be doing a front
Kenneth> end for GCC and the associated 'back end' work for GDB that
Kenneth> would allow building a decent GUI of the quality of some of
Kenneth> the commercial Common Lisp implementations (e.g. MCL).
Kenneth> That in turn would enable both a "seamless" C FFI and allow
Kenneth> some of the energy flowing into Linux to be channelled
Kenneth> incrementally into building more robust systems (see my
Kenneth> "System Principles" scribblings, below).

Kenneth> I find much like particularly in Gambit, PLT/MzScheme, and
Kenneth> Scheme48, depending on hw/system resources.

Kenneth> Note that you can today use Scheme's which compile to C to
Kenneth> produce ".o"s which are "stand alone" with the work of
Kenneth> duplicating (or linking in) the support library pieces
Kenneth> required. In particular, Gambit's compiler info specifies
Kenneth> which external routines are called and might be a
Kenneth> convenient starting point. [I have only started looking at
Kenneth> MzScheme and like the language and GPL'ed licence a lot but
Kenneth> have not yet looked at the compiler & runtime requirements
Kenneth> in that environment; I am also very interested in seeing
Kenneth> what has been done with Gambit 4.0 when it is available
Kenneth> (are you listening Marc?)].


Friedrich Dominicus

unread,
May 11, 2000, 3:00:00 AM5/11/00
to
There are good points in your mail, but I can't help asking: what other
systems have you checked? Why do you think you are alone with your
ideas, and aren't many of those things implemented in mainframe OSes?

BTW you should check OpenGenera too. It's what one might call a
Lisp-machine, you can search for symbolics on the Internet. OpenGenera
should run on Alphas with Tru64 Unix.

Anyway, I disagree with the point that OSes are not reliable as they
are; I'm using Linux nearly exclusively and it has worked nicely. So
you may have to state more clearly what you don't like.

Regards
Friedrich

Marc Feeley

unread,
May 11, 2000, 3:00:00 AM5/11/00
to
> Note that you can today use Scheme's which compile to C to produce
> ".o"s which are "stand alone" with the work of duplicating (or linking
> in) the support library pieces required. In particular, Gambit's
> compiler info specifies which external routines are called and might
> be a convenient starting point. [I have only started looking at
> MzScheme and like the language and GPL'ed licence a lot but have not
> yet looked at the compiler & runtime requirements in that environment;
> I am also very interested in seeing what has been done with Gambit 4.0
> when it is available (are you listening Marc?)].

The main additions in Gambit-C 4.0 are:

1) an efficient priority-based real-time multithreading system; thread
operations are about 100 times faster than Java JDK threads and
1000 times faster than LinuxThreads

2) I/O is non-blocking (this is of course required for a thread system)
and much more efficient than before, roughly a factor of 10, because
buffering is done at the Scheme level

3) more efficient and "safe for space" implementation of
continuations (this was motivated by the thread system which uses
call/cc to save the thread state)

4) the command line interface supports line-editing and history

5) the compiler generates faster code when declarations are **not**
used

6) the runtime has been improved in many places, in particular
bignums are faster and more space efficient

Although the thread system has all the logic required to make it a
real-time multithreading system, it relies on the base operating system
(Unix or Windows) to supply it with timer interrupts and do the
low-level I/O, so on those OSes you don't really get real-time. It
would be nice to replace this low-level layer with direct access to the
devices; that would provide a true real-time Scheme OS... that
would be neat! Anyone interested in working on this?

Marc
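
To make item 3 of Marc's list concrete, here is a toy sketch in plain
R5RS Scheme (not Gambit's actual thread system) of how call/cc can save
and resume a thread's state; a real implementation layers priorities,
timer interrupts and safe-for-space continuations on top of this idea:

    ;; A round-robin queue of saved continuations; each "thread" yields by
    ;; capturing its continuation with call/cc and handing control to the
    ;; next entry in the queue.

    (define ready-queue '())

    (define (spawn! thunk)
      (set! ready-queue
            (append ready-queue (list (lambda (ignored) (thunk))))))

    (define (yield!)
      (call-with-current-continuation
        (lambda (k)
          (set! ready-queue (append ready-queue (list k)))
          (run-next! #f))))

    (define (run-next! val)
      (if (null? ready-queue)
          val
          (let ((next (car ready-queue)))
            (set! ready-queue (cdr ready-queue))
            (next val))))

    (spawn! (lambda () (display "A1 ") (yield!) (display "A2 ") (run-next! #f)))
    (spawn! (lambda () (display "B1 ") (yield!) (display "B2 ") (run-next! #f)))
    (run-next! #f)   ; prints: A1 B1 A2 B2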

felix

unread,
May 11, 2000, 3:00:00 AM5/11/00
to

Marc Feeley wrote in message ...

>
>The main additions in Gambit-C 4.0 are:
>
> ... (lots of nice features)

Just one question: does it provide full support for multiple values?
I'm a big fan of SRFI-1 which uses multiple values extensively (and
especially in combination with call/cc). Furthermore: what about
dynamic-wind?

felix
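
For concreteness, the kind of multiple-values code felix has in mind looks
roughly like this; partition below is a simplified stand-in for SRFI-1's
procedure of the same name, not the library version:

    ;; partition returns two values: the elements satisfying pred, and the rest.
    (define (partition pred lst)
      (let loop ((lst lst) (yes '()) (no '()))
        (cond ((null? lst) (values (reverse yes) (reverse no)))
              ((pred (car lst)) (loop (cdr lst) (cons (car lst) yes) no))
              (else (loop (cdr lst) yes (cons (car lst) no))))))

    ;; Consume both values with call-with-values.
    (call-with-values
      (lambda () (partition even? '(1 2 3 4 5 6)))
      (lambda (evens odds)
        (list evens odds)))        ; => ((2 4 6) (1 3 5))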


John Perks

unread,
May 11, 2000, 3:00:00 AM5/11/00
to
On Thu, 11 May 2000, Marc Feeley wrote:
> 5) the compiler generates faster code when declarations are **not**
> used

How does that work!? Do you mean that when any variable (including
standard procedures) could be redefined to anything of any type, it runs
faster? Or is this another use of "declaration"?


Marc Feeley

unread,
May 11, 2000, 3:00:00 AM5/11/00
to
> On Thu, 11 May 2000, Marc Feeley wrote:
> > 5) the compiler generates faster code when declarations are **not**
> > used
>
> How does that work!? Do you mean that when any variable (including
> standard procedures) could be redefined to anything of any type, it runs
> faster?

It runs faster than in previous versions, not faster than when
declarations are used! In all versions up to Gambit-C 3.0, when no
declarations are used, the expression (car x) actually dereferences
the variable car and performs a procedure call. Not only does this
require a procedure call, but in Gambit-C it is also a cross-module
call, which is slow because of the support for tail calls. In 4.0 the
expression (car x) will be compiled roughly as follows:

(car x)  ==>  (let ((proc car) (arg x))
                (if (and (##eq? proc the-car-procedure)
                         (##pair? arg))
                    (##car arg)
                    (proc arg)))

where the-car-procedure has been bound to the car primitive and ##car
is an unsafe primitive that is inlined by the compiler (and similarly
for ##eq? and ##pair?). So in typical programs where the variable car
is not changed, the performance is much better (at the expense of
bigger code). Note that this preserves Scheme's semantics.

Marc
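
For contrast with point 5 of the earlier list, a sketch of what one would
write to skip the check entirely; the declaration names are recalled from
Gambit-C's manual and should be treated as assumptions rather than
verified syntax:

    ;; Hypothetical example file; the declarations below are believed to be
    ;; Gambit-C's, but verify against the manual before relying on them.
    (declare (standard-bindings)   ; car, cdr, ... are never redefined
             (extended-bindings)
             (not safe))           ; omit run-time checks such as pair?

    (define (first-of x)
      (car x))                     ; compiles to an inlined, unchecked car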

Kenneth Dickey

unread,
May 11, 2000, 3:00:00 AM5/11/00
to
Friedrich Dominicus wrote:
...

> Anyway, I disagree with the point that OSes are not reliable as they
> are; I'm using Linux nearly exclusively and it works nicely. So you may
> have to state more clearly what you don't like.

I am not sure who the 'you' refers to here, but on the assumption it is me, what I wrote was:


"I have also been toying with the idea of using Scheme to build reliable systems (but perhaps on top of a Linix kernel)."

So you see, I have expressed no major gripes about OS's. Perhaps I should list a few. My main point was that most computing systems are unusable by typical non-geeks. E.g. you have an ISP and you want to set up a PPP connection on Linux: one has to ask questions like "is the ISP using CHAP or PAP?" You frequently have to deal with TCL, PERL, PYTHON, awk, sed, translit, ed, and all the other Unix Haters fodder. God help you if you need to rebuild the kernel to apply custom patches and have to figure out which consistent set of C support libraries and gcc/egcs compiler version to use and ...

Now, you run out of battery on your laptop and then plug it in to the power transformer. It immediately reboots and brings you back to the point you were at, less the last few seconds of typing, right? No way. How about data corruption as the last few bytes were being written by some app as the disk spun down? Do you recover the app state and data to the last stable point? No? When you have a number of background tasks going on and you 'click' on some app's screen area, do you always know immediately that the 'click' was queued for that app--even if the app does not respond for 30 seconds (or longer)? No?

How about a system that saved your state and data reliably and had a usable interface? How about a system which showed you what was going on using mechanical models (e.g. pick the proper plug to the ISP port and see the 'fluid flow' of the data)? How about a system which not only allowed you to see what was going on but also told you what you could do about it? Visibility and Control are the hallmarks of a good interface. All the leading desktops these days are light years away from a video game in learnability or usability. OS support for building reliable systems is fair, but could use some help (e.g. system state, GC & transaction support along with file system journaling). So I really have gripes at the UI, Application, Library, and OS levels.

We do have the technology basics to build usable, reliable systems. The shame is that we are very, very far from doing it.

$0.02,
-KenD

Brian Denheyer

unread,
May 12, 2000, 3:00:00 AM5/12/00
to
>>>>> "Marc" == Marc Feeley <fee...@trex.IRO.UMontreal.CA> writes:

Marc> The main additions in Gambit-C 4.0 are:

... extremely good ones.


Marc> (Unix or Windows) to supply it with timer interrupts and do the
Marc> low-level I/O, so on the OSes you don't really get real-time. It
Marc> would be nice to replace this low-level with direct access to the
Marc> devices and this would provide a true real-time Scheme OS... that
Marc> would be neat! Anyone interested in working on this?

Well, I was 3/4 serious about scheme/EROS. Gambit's compiling
mechanism is so flexible that it would be very easy to hook it
directly into the low-level EROS services. I also mention EROS
because the "core" of the OS is really quite small. And it supports
checkpointing natively - a technology that's applicable to both the
desktop and embedded systems.

It's those pesky day jobs that get in the way of having fun.


Brian


felix

unread,
May 12, 2000, 3:00:00 AM5/12/00
to

Kenneth Dickey wrote in message <391B43AA...@earthlink.net>...

>Friedrich Dominicus wrote:
>...
>> Anyway, I disagree with the point that OSes are not reliable as they
>> are; I'm using Linux nearly exclusively and it works nicely. So you may
>> have to state more clearly what you don't like.
>
> [...]

>
>We do have the technology basics to build usable, reliable systems. The
>shame is that we are very, very far from doing it.


Far too many people (especially in the programming language/software
engineering community) see Linux as the culmination of operating system
(and software environment) technology. This may be for a number of reasons:

- There are not too many widely available alternatives
- Many programmers experienced the Unix/Linux software environment as the
first really usable system after their Windows/DOS/Amiga/whatever days
- Many people take an awful pride in tinkering around with their system and
naturally regard as superior whatever gives them the most control and
customization opportunities.

Personally I hate being degraded to a systems administrator just because
I want to use a little bit of emacs and the GNU toolset under X windows.
The pain of installation is just another thing. The constantly growing
cancer of the multitude of initialization, driver and customization
scripts is yet another.

(sorry, I'm drifting off)

There were some really promising beginnings in the history of computer and
programming interfaces: Lisp machines, Smalltalk/80, the Macintosh (some
years ago). But now everything converges to the same goal: a legacy OS (be
it Unix- or DOS-based) that is conceptually simple but ended up in an
incredible mess, and the continuous struggle to support as much hardware
as possible.

That's what I like in Java: it just ignores many of the (really irrelevant)
details of the underlying platform. Of course it has its own quirks, but
wouldn't it be really nice to have a Java chip in every computer, with a VM
that is just a tiny little bit more enhanced to provide better support for
higher-order functions (and throw in first-class continuations, just for
the fun of it)? And I really hate Java as a programming language! (not
quite as much as Perl, perhaps :-).

To come to the point: Linux is not the answer! It just works kind of OK.
But for a tremendous price. It's not usable by someone like my mother. And
that is more severe than it sounds :}

felix

